performance changes on c9dc4c65: 9.8% fsmark.files_per_sec
by Yuanhan Liu
FYI, we found a performance improvement (expected, as the commit message
indicates) on `fsmark.files_per_sec' by commit c9dc4c6578502c2085705347375b82089aad18d0:
> commit c9dc4c6578502c2085705347375b82089aad18d0
> Author: Chris Mason <clm(a)fb.com>
> AuthorDate: Sat Apr 4 17:14:42 2015 -0700
> Commit: Chris Mason <clm(a)fb.com>
> CommitDate: Fri Apr 10 14:07:11 2015 -0700
>
> Btrfs: two stage dirty block group writeout
4c6d1d85ad89fd8e32dc9204b7f944854399bda9 c9dc4c6578502c2085705347375b82089aad18d0
---------------------------------------- ----------------------------------------
run time(m) metric_value ±stddev run time(m) metric_value ±stddev change testbox/benchmark/testcase-params
--- ------ ---------------------------- --- ------ ---------------------------- -------- ------------------------------
3 7.3 |35.267| ±0.5 5 6.6 |38.740| ±1.6 9.8% ivb44/fsmark/1x-1t-1HDD-btrfs-4M-60G-NoSync
NOTE: here is some more explanation of those test parameters, to help you
understand what the testcase does (see the command sketch after this list):
1x: 'x' means iterations (loops), corresponding to the '-L' option of fsmark
1t, 64t: 't' means the number of threads
4M: the size of a single file, corresponding to the '-s' option of fsmark
60G: the total test size
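For example, the 1x-1t-1HDD-btrfs-4M-60G-NoSync case above maps roughly to an
fs_mark invocation like the one below (a sketch based only on the parameter
names; the mount point and the exact lkp job script are assumptions):

  fs_mark -d /fs/sda1 -L 1 -t 1 -s 4194304 -n 15360 -S 0
  # -L 1: one loop; -t 1: one thread; -s 4194304: 4M per file
  # -n 15360: 60G total / 4M per file; -S 0: no sync calls (NoSync)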
And FYI, here are more changes by the same commit:
4c6d1d85ad89fd8e c9dc4c6578502c208570534737
---------------- --------------------------
%stddev %change %stddev
\ | \
9864 ± 2% +156.9% 25345 ± 4% fsmark.time.voluntary_context_switches
9 ± 0% +17.8% 10 ± 4% fsmark.time.percent_of_cpu_this_job_got
462211 ± 1% +16.8% 539707 ± 0% fsmark.app_overhead
35.27 ± 0% +9.8% 38.74 ± 1% fsmark.files_per_sec
435 ± 0% -9.0% 396 ± 1% fsmark.time.elapsed_time.max
435 ± 0% -9.0% 396 ± 1% fsmark.time.elapsed_time
5.20 ± 2% -70.3% 1.54 ± 6% turbostat.Pkg%pc6
2447873 ± 42% -67.9% 785086 ± 33% numa-numastat.node1.numa_hit
2413662 ± 43% -68.1% 771115 ± 31% numa-numastat.node1.local_node
9864 ± 2% +156.9% 25345 ± 4% time.voluntary_context_switches
187680 ± 10% +126.8% 425676 ± 7% numa-vmstat.node1.nr_dirty
747361 ± 9% +127.8% 1702809 ± 7% numa-meminfo.node1.Dirty
1787510 ± 1% +117.0% 3878984 ± 2% meminfo.Dirty
446861 ± 1% +117.0% 969472 ± 2% proc-vmstat.nr_dirty
1655962 ± 37% -59.3% 673988 ± 29% numa-vmstat.node1.numa_local
1036191 ± 8% +110.3% 2179311 ± 3% numa-meminfo.node0.Dirty
259069 ± 8% +110.3% 544783 ± 3% numa-vmstat.node0.nr_dirty
1687987 ± 37% -58.6% 698626 ± 29% numa-vmstat.node1.numa_hit
1 ± 0% +100.0% 2 ± 0% vmstat.procs.b
0.02 ± 0% +100.0% 0.04 ± 22% turbostat.CPU%c3
6.03 ± 1% +76.9% 10.67 ± 1% turbostat.CPU%c1
5.189e+08 ± 0% +72.6% 8.956e+08 ± 1% cpuidle.C1-IVT.time
2646692 ± 7% +75.0% 4630890 ± 23% cpuidle.C3-IVT.time
5301 ± 6% -31.7% 3620 ± 3% slabinfo.btrfs_ordered_extent.active_objs
10549 ± 16% -30.3% 7349 ± 12% numa-vmstat.node1.nr_slab_reclaimable
5353 ± 6% -31.4% 3670 ± 3% slabinfo.btrfs_ordered_extent.num_objs
42169 ± 16% -30.3% 29397 ± 12% numa-meminfo.node1.SReclaimable
1619825 ± 22% +39.4% 2258188 ± 4% proc-vmstat.pgfree
4611 ± 7% -28.0% 3318 ± 1% slabinfo.btrfs_delayed_ref_head.num_objs
4471 ± 8% -27.0% 3264 ± 2% slabinfo.btrfs_delayed_ref_head.active_objs
67.93 ± 1% -24.7% 51.15 ± 4% turbostat.Pkg%pc2
2332975 ± 21% +45.6% 3396446 ± 4% numa-vmstat.node1.numa_other
2300949 ± 22% +46.5% 3371807 ± 4% numa-vmstat.node1.numa_miss
2300941 ± 22% +46.5% 3371793 ± 4% numa-vmstat.node0.numa_foreign
2952 ± 8% -23.3% 2263 ± 3% slabinfo.btrfs_delayed_data_ref.num_objs
2570716 ± 3% +25.7% 3230157 ± 2% numa-meminfo.node1.Writeback
642367 ± 3% +25.7% 807533 ± 2% numa-vmstat.node1.nr_writeback
95408 ± 13% -17.3% 78910 ± 6% numa-meminfo.node1.Slab
2803 ± 7% -21.1% 2210 ± 3% slabinfo.btrfs_delayed_data_ref.active_objs
240 ± 9% +23.1% 295 ± 16% numa-vmstat.node0.nr_page_table_pages
4626942 ± 19% +49.6% 6924087 ± 22% cpuidle.C1E-IVT.time
5585235 ± 0% +25.5% 7011242 ± 0% meminfo.Writeback
1396232 ± 0% +25.5% 1752892 ± 0% proc-vmstat.nr_writeback
962 ± 9% +23.0% 1184 ± 16% numa-meminfo.node0.PageTables
9 ± 0% +17.8% 10 ± 4% time.percent_of_cpu_this_job_got
754027 ± 2% +25.2% 944312 ± 1% numa-vmstat.node0.nr_writeback
3018674 ± 2% +25.1% 3777338 ± 1% numa-meminfo.node0.Writeback
23509 ± 1% -16.9% 19530 ± 0% slabinfo.kmalloc-1024.active_objs
2972 ± 1% +21.4% 3607 ± 0% proc-vmstat.nr_alloc_batch
13956 ± 4% -15.6% 11773 ± 8% slabinfo.kmalloc-192.active_objs
743 ± 1% -16.0% 624 ± 0% slabinfo.kmalloc-1024.active_slabs
743 ± 1% -16.0% 624 ± 0% slabinfo.kmalloc-1024.num_slabs
23790 ± 1% -16.0% 19983 ± 0% slabinfo.kmalloc-1024.num_objs
68983 ± 2% +19.1% 82190 ± 4% softirqs.RCU
222 ± 11% +47.0% 326 ± 25% cpuidle.POLL.usage
14177 ± 0% +17.8% 16702 ± 1% slabinfo.kmalloc-2048.num_objs
14045 ± 0% +18.0% 16568 ± 1% slabinfo.kmalloc-2048.active_objs
885 ± 0% +17.8% 1043 ± 1% slabinfo.kmalloc-2048.num_slabs
885 ± 0% +17.8% 1043 ± 1% slabinfo.kmalloc-2048.active_slabs
14025 ± 4% -13.3% 12157 ± 7% slabinfo.kmalloc-192.num_objs
8287205 ± 10% +16.0% 9611684 ± 0% numa-numastat.node0.numa_hit
8276795 ± 10% +15.9% 9592682 ± 0% numa-numastat.node0.local_node
2615463 ± 5% -9.6% 2365256 ± 2% numa-vmstat.node1.nr_written
1814 ± 5% -12.7% 1584 ± 11% numa-meminfo.node1.PageTables
453 ± 5% -12.6% 396 ± 11% numa-vmstat.node1.nr_page_table_pages
105943 ± 6% +13.6% 120352 ± 2% numa-meminfo.node0.SReclaimable
26492 ± 6% +13.6% 30086 ± 2% numa-vmstat.node0.nr_slab_reclaimable
0.41 ± 1% +17.1% 0.48 ± 4% time.user_time
2155 ± 4% -11.1% 1916 ± 5% slabinfo.btrfs_delayed_tree_ref.active_objs
2.028e+10 ± 0% -11.1% 1.803e+10 ± 1% cpuidle.C6-IVT.time
2155 ± 4% -10.8% 1922 ± 5% slabinfo.btrfs_delayed_tree_ref.num_objs
1202 ± 4% -11.2% 1067 ± 9% slabinfo.btrfs_trans_handle.num_objs
1202 ± 4% -11.2% 1067 ± 9% slabinfo.btrfs_trans_handle.active_objs
192641 ± 5% +9.8% 211569 ± 2% numa-meminfo.node0.Slab
268137 ± 0% +12.2% 300911 ± 2% cpuidle.C6-IVT.usage
435 ± 0% -9.0% 396 ± 1% time.elapsed_time
435 ± 0% -9.0% 396 ± 1% time.elapsed_time.max
21057 ± 0% -9.0% 19165 ± 1% uptime.idle
29.89 ± 0% +37.2% 41.01 ± 3% turbostat.CorWatt
59.95 ± 0% +19.6% 71.69 ± 2% turbostat.PkgWatt
18873 ± 0% +14.9% 21692 ± 1% vmstat.system.cs
21 ± 2% +8.8% 23 ± 3% turbostat.Avg_MHz
135 ± 0% +9.1% 147 ± 0% iostat.sda.avgqu-sz
0.69 ± 2% +7.2% 0.74 ± 3% turbostat.%Busy
7478 ± 0% +5.1% 7861 ± 0% iostat.sda.await
7478 ± 0% +5.1% 7861 ± 0% iostat.sda.w_await
239 ± 0% +3.6% 247 ± 0% iostat.sda.wrqm/s
3.54 ± 0% +3.9% 3.68 ± 0% turbostat.RAMWatt
129619 ± 0% +3.2% 133743 ± 0% vmstat.io.bo
128667 ± 0% +1.9% 131056 ± 0% iostat.sda.wkB/s
--yliu
performance changes on 78373b73: -46.6% fsmark.files_per_sec, and few more
by Yuanhan Liu
FYI, we found changes on `fsmark.files_per_sec' by 78373b7319abdf15050af5b1632c4c8b8b398f33:
> commit 78373b7319abdf15050af5b1632c4c8b8b398f33
> Author: Jaegeuk Kim <jaegeuk(a)kernel.org>
> AuthorDate: Fri Mar 13 21:44:36 2015 -0700
> Commit: Jaegeuk Kim <jaegeuk(a)kernel.org>
> CommitDate: Fri Apr 10 15:08:45 2015 -0700
>
> f2fs: enhance multi-threads performance
3402e87cfb5e762f9c95071bf4a2ad65fd9392a2 78373b7319abdf15050af5b1632c4c8b8b398f33
---------------------------------------- ----------------------------------------
run time(m) metric_value ±stddev run time(m) metric_value ±stddev change testbox/benchmark/testcase-params
--- ------ ---------------------------- --- ------ ---------------------------- -------- ------------------------------
3 0.3 |490.800| ±5.7 3 0.5 |262.067| ±0.4 -46.6% ivb44/fsmark/1x-64t-4BRD_12G-RAID0-f2fs-4M-30G-fsyncBeforeClose
3 0.3 |468.367| ±3.5 3 0.5 |264.467| ±0.2 -43.5% ivb44/fsmark/1x-64t-9BRD_6G-RAID0-f2fs-4M-30G-fsyncBeforeClose
3 0.6 |211.867| ±0.7 3 0.7 |191.067| ±0.5 -9.8% ivb44/fsmark/1x-64t-4BRD_12G-RAID5-f2fs-4M-30G-fsyncBeforeClose
NOTE: here is some more information about those test parameters, to help you
understand what the testcase does (see the setup sketch after this list):
1x: 'x' means iterations (loops), corresponding to the '-L' option of fsmark
1t, 64t: 't' means the number of threads
4M: the size of a single file, corresponding to the '-s' option of fsmark
40G, 30G, 120G: the total test size
4BRD_12G: BRD is a ramdisk; '4' means 4 ramdisks and '12G' means the size of
          each ramdisk, so 48G in total. A RAID array is assembled on top of
          those ramdisks.
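As a concrete example, the 1x-64t-4BRD_12G-RAID0-f2fs-4M-30G-fsyncBeforeClose
case would be set up roughly like this (a sketch based on the parameter names;
device names, mount point and the exact lkp scripts are assumptions):

  modprobe brd rd_nr=4 rd_size=12582912   # 4 ramdisks, 12G each (rd_size in KB)
  mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/ram[0-3]
  mkfs.f2fs /dev/md0
  mount /dev/md0 /mnt/f2fs
  fs_mark -d /mnt/f2fs -L 1 -t 64 -s 4194304 -n 120 -S 1
  # -t 64: 64 threads; -n 120: 30G / 4M per file / 64 threads
  # -S 1: fsync before close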
The change is a bit interesting, since you stated clearly that this patch is
meant as a performance gain. The patch itself is simple, too: it removes a
mutex lock. So the only reasonable cause I can think of, without digging too
deeply, is that removing this lock reduces sleep time and allows more
processes to run, but somehow increases context switches and CPU usage
somewhere in the meantime. I guess this is what the following changes are
trying to tell us:
29708 ± 2% +5720.0% 1729051 ± 1% fsmark.time.voluntary_context_switches
302 ± 0% +113.8% 647 ± 0% fsmark.time.percent_of_cpu_this_job_got
61.05 ± 0% +214.0% 191.70 ± 0% fsmark.time.system_time
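A hypothetical way to see where those extra voluntary context switches come
from (not something from the original report) would be to watch the system
while the test runs:

  pidstat -w 1    # per-process voluntary (cswch/s) and involuntary (nvcswch/s) switches
  perf stat -e context-switches,cpu-migrations -a sleep 10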
FYI, here I list all the changes for the most outstanding case:
3 0.3 |490.800| ±5.7 3 0.5 |262.067| ±0.4 -46.6% ivb44/fsmark/1x-64t-4BRD_12G-RAID0-f2fs-4M-30G-fsyncBeforeClose
3402e87cfb5e762f 78373b7319abdf15050af5b163
---------------- --------------------------
%stddev %change %stddev
\ | \
29708 ± 2% +5720.0% 1729051 ± 1% fsmark.time.voluntary_context_switches
61.05 ± 0% +214.0% 191.70 ± 0% fsmark.time.system_time
302 ± 0% +113.8% 647 ± 0% fsmark.time.percent_of_cpu_this_job_got
10476 ± 0% +95.4% 20467 ± 5% fsmark.time.minor_page_faults
490 ± 5% -46.6% 262 ± 0% fsmark.files_per_sec
20.21 ± 0% +46.7% 29.65 ± 0% fsmark.time.elapsed_time
20.21 ± 0% +46.7% 29.65 ± 0% fsmark.time.elapsed_time.max
226379 ± 0% +32.5% 299882 ± 0% fsmark.app_overhead
0 ± 0% +Inf% 1045 ± 2% proc-vmstat.numa_pages_migrated
209 ± 26% +3272.3% 7059 ± 3% cpuidle.C1E-IVT.usage
228 ± 42% +686.7% 1799 ± 14% numa-meminfo.node0.Writeback
14633 ± 5% +7573.2% 1122849 ± 1% cpuidle.C1-IVT.usage
0 ± 0% +Inf% 1045 ± 2% proc-vmstat.pgmigrate_success
29708 ± 2% +5720.0% 1729051 ± 1% time.voluntary_context_switches
55663 ± 0% +776.9% 488081 ± 0% cpuidle.C6-IVT.usage
56 ± 42% +718.8% 464 ± 11% numa-vmstat.node0.nr_writeback
535 ± 29% +334.4% 2325 ± 10% meminfo.Writeback
129 ± 30% +295.6% 511 ± 4% proc-vmstat.nr_writeback
59.25 ± 5% -74.2% 15.26 ± 3% turbostat.CPU%c6
2.58 ± 8% -74.5% 0.66 ± 11% turbostat.Pkg%pc2
1.551e+08 ± 14% +233.4% 5.171e+08 ± 4% cpuidle.C1-IVT.time
32564 ± 24% +208.1% 100330 ± 5% softirqs.RCU
61.05 ± 0% +214.0% 191.70 ± 0% time.system_time
60 ± 32% +165.7% 160 ± 16% numa-vmstat.node1.nr_writeback
2 ± 0% +200.0% 6 ± 0% vmstat.procs.r
3057 ± 2% +166.1% 8136 ± 22% numa-vmstat.node0.nr_mapped
12240 ± 2% +165.9% 32547 ± 22% numa-meminfo.node0.Mapped
6324 ± 3% +148.4% 15709 ± 0% proc-vmstat.nr_mapped
25318 ± 3% +148.6% 62931 ± 0% meminfo.Mapped
302 ± 0% +113.8% 647 ± 0% time.percent_of_cpu_this_job_got
34.39 ± 8% +102.9% 69.79 ± 0% turbostat.CPU%c1
3247 ± 32% +124.9% 7303 ± 14% numa-vmstat.node0.nr_active_anon
12994 ± 32% +124.8% 29215 ± 14% numa-meminfo.node0.Active(anon)
3256 ± 4% +132.5% 7569 ± 23% numa-vmstat.node1.nr_mapped
3252 ± 32% +124.6% 7304 ± 14% numa-vmstat.node0.nr_anon_pages
13036 ± 4% +132.3% 30279 ± 23% numa-meminfo.node1.Mapped
13033 ± 31% +124.1% 29214 ± 14% numa-meminfo.node0.AnonPages
10476 ± 0% +95.4% 20467 ± 5% time.minor_page_faults
121752 ± 7% -35.8% 78122 ± 2% numa-meminfo.node0.Dirty
30381 ± 7% -36.0% 19456 ± 2% numa-vmstat.node0.nr_dirty
113996 ± 4% +64.4% 187416 ± 0% proc-vmstat.pgfault
59301 ± 1% +60.9% 95414 ± 1% softirqs.SCHED
1.92 ± 1% -31.9% 1.31 ± 4% turbostat.Pkg%pc6
16 ± 35% +54.2% 24 ± 10% cpuidle.POLL.usage
20.21 ± 0% +46.7% 29.65 ± 0% time.elapsed_time.max
20.21 ± 0% +46.7% 29.65 ± 0% time.elapsed_time
7114 ± 2% +46.3% 10405 ± 0% proc-vmstat.nr_active_anon
28499 ± 2% +46.2% 41659 ± 0% meminfo.Active(anon)
28617 ± 2% +45.7% 41692 ± 0% meminfo.AnonPages
7138 ± 2% +45.7% 10403 ± 0% proc-vmstat.nr_anon_pages
95342 ± 2% +44.9% 138177 ± 6% numa-meminfo.node0.Active
82347 ± 2% +32.3% 108962 ± 9% numa-meminfo.node0.Active(file)
20586 ± 2% +32.3% 27240 ± 9% numa-vmstat.node0.nr_active_file
233023 ± 3% -30.7% 161486 ± 0% meminfo.Dirty
0.19 ± 2% +36.2% 0.26 ± 3% time.user_time
57900 ± 3% -30.5% 40219 ± 0% proc-vmstat.nr_dirty
108006 ± 8% -24.6% 81460 ± 2% numa-meminfo.node1.Dirty
26896 ± 8% -24.4% 20331 ± 2% numa-vmstat.node1.nr_dirty
219977 ± 0% +32.9% 292456 ± 1% softirqs.TIMER
197382 ± 1% +32.6% 261733 ± 0% meminfo.Active
42210 ± 1% +30.2% 54947 ± 0% proc-vmstat.nr_active_file
168883 ± 1% +30.3% 220073 ± 0% meminfo.Active(file)
353 ± 4% +21.7% 430 ± 3% numa-vmstat.node0.nr_page_table_pages
1425 ± 4% +20.2% 1714 ± 4% numa-meminfo.node0.PageTables
86494 ± 2% +28.0% 110745 ± 10% numa-meminfo.node1.Active(file)
21621 ± 2% +28.0% 27683 ± 10% numa-vmstat.node1.nr_active_file
46.00 ± 4% +20.8% 55.56 ± 3% uptime.boot
101939 ± 4% +20.8% 123117 ± 8% numa-meminfo.node1.Active
1412 ± 9% +10.8% 1565 ± 8% numa-vmstat.node0.nr_alloc_batch
90644 ± 6% +22.4% 110984 ± 0% proc-vmstat.pgfree
2084 ± 4% +15.7% 2412 ± 3% uptime.idle
18666 ± 1% +9.2% 20390 ± 4% slabinfo.anon_vma.num_objs
18663 ± 1% +9.3% 20390 ± 4% slabinfo.anon_vma.active_objs
435.51 ± 0% -89.9% 43.95 ± 0% iostat.md0.avgrq-sz
6415 ± 0% +875.4% 62579 ± 1% iostat.md0.w/s
5595 ± 1% +1883.4% 110979 ± 1% vmstat.system.cs
280678 ± 0% +315.2% 1165516 ± 0% iostat.md0.avgqu-sz
6.31 ± 0% +123.9% 14.14 ± 0% turbostat.%Busy
195 ± 0% +116.4% 423 ± 0% turbostat.Avg_MHz
26035 ± 0% +76.8% 46037 ± 0% vmstat.system.in
1370424 ± 0% -28.1% 985009 ± 0% vmstat.io.bo
1459306 ± 0% -28.1% 1048622 ± 0% iostat.md0.wkB/s
72.28 ± 0% +29.6% 93.70 ± 0% turbostat.CorWatt
102 ± 0% +21.4% 124 ± 0% turbostat.PkgWatt
6.66 ± 2% +8.1% 7.19 ± 0% turbostat.RAMWatt
[kernel/printk/printk.c] d18bbc215f8: Out of memory: Kill process 457 (rc) score 7 or sacrifice child
by Huang Ying
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit d18bbc215f81710e1eab7120becafa910554d68d ("kernel/printk/printk.c: revert "printk: enable interrupts before calling console_trylock_for_printk()"")
+----------------------------------------------------+------------+------------+
| | e84f1ab33c | d18bbc215f |
+----------------------------------------------------+------------+------------+
| boot_successes | 0 | 6 |
| boot_failures | 32 | 24 |
| BUG:spinlock_lockup_suspected_on_CPU | 32 | |
| INFO:possible_circular_locking_dependency_detected | 32 | |
| INFO:rcu_preempt_self-detected_stall_on_CPU | 31 | |
| RIP:flat_send_IPI_mask | 31 | |
| backtrace:do_sys_open | 32 | 7 |
| backtrace:SyS_open | 32 | |
| backtrace:kernel_init_freeable | 32 | |
| backtrace:print_ICs | 32 | |
| backtrace:setup_default_timer_irq | 32 | |
| backtrace:hpet_time_init | 32 | |
| backtrace:x86_late_time_init | 32 | |
| Out_of_memory:Kill_process | 0 | 24 |
| backtrace:compat_SyS_open | 0 | 7 |
| backtrace:sock_create | 0 | 4 |
| backtrace:SyS_socket | 0 | 5 |
| backtrace:compat_SyS_socketcall | 0 | 5 |
| page_allocation_failure:order:#,mode | 0 | 2 |
| backtrace:vmalloc | 0 | 2 |
| backtrace:SyS_init_module | 0 | 2 |
| WDT_device_closed_unexpectedly.WDT_will_not_stop | 0 | 3 |
| backtrace:do_fork | 0 | 4 |
| backtrace:SyS_clone | 0 | 3 |
| backtrace:__mm_populate | 0 | 4 |
| backtrace:SyS_mlockall | 0 | 4 |
| backtrace:compat_SyS_openat | 0 | 1 |
| backtrace:sock_alloc_file | 0 | 1 |
+----------------------------------------------------+------------+------------+
We found that this commit fixes the problem seen with the parent commit, shown
below (the dmesg of the parent commit is attached as
dmesg_of_the_parent_commit).
[ 2.530427] BUG: spinlock lockup suspected on CPU#0, swapper/1
[ 2.530427] lock: serial8250_ports+0x0/0x700, .magic: dead4ead, .owner: swapper/1, .owner_cpu: 0
[ 2.530427] CPU: 0 PID: 1 Comm: swapper Not tainted 3.16.0-rc3-00074-ge84f1ab #16
[ 2.530427] 0000000000000000 ffff8800003b7650 ffffffff816d97b8 ffff8800003b7670
[ 2.530427] ffffffff810989a0 ffffffff82dc7d80 00000000a08baba0 ffff8800003b7698
[ 2.530427] ffffffff81098b35 ffffffff82dc7d80 0000000000000046 0000000000000000
[ 2.530427] Call Trace:
[ 2.530427] [<ffffffff816d97b8>] dump_stack+0x19/0x1b
[ 2.530427] [<ffffffff810989a0>] spin_dump+0x90/0x95
[ 2.530427] [<ffffffff81098b35>] do_raw_spin_lock+0x108/0x128
[ 2.530427] [<ffffffff816e16ac>] _raw_spin_lock_irqsave+0x4f/0x5b
[ 2.530427] [<ffffffff813ca0ac>] ? serial8250_console_write+0xaf/0x15b
[ 2.530427] [<ffffffff813ca0ac>] serial8250_console_write+0xaf/0x15b
[ 2.530427] [<ffffffff8109d72d>] call_console_drivers+0xc6/0x175
[ 2.530427] [<ffffffff8109e7e8>] console_unlock+0x2e2/0x3ca
[ 2.530427] [<ffffffff8109ed5b>] vprintk_emit+0x48b/0x49f
[ 2.530427] [<ffffffff816d78eb>] printk+0x4f/0x57
[ 2.530427] [<ffffffff816d6f65>] print_circular_bug_header+0x3c/0xd8
[ 2.530427] [<ffffffff816d707f>] print_circular_bug+0x7e/0x2bf
[ 2.530427] [<ffffffff810951fc>] __lock_acquire+0x13dd/0x1a6d
[ 2.530427] [<ffffffff810a0200>] ? irq_alloc_hwirqs+0x95/0x95
[ 2.530427] [<ffffffff81095252>] ? __lock_acquire+0x1433/0x1a6d
[ 2.530427] [<ffffffff81095f8d>] lock_acquire+0xc6/0x1a9
[ 2.530427] [<ffffffff81095f8d>] ? lock_acquire+0xc6/0x1a9
[ 2.530427] [<ffffffff810a02aa>] ? __irq_get_desc_lock+0x8a/0x9a
[ 2.530427] [<ffffffff816e16a4>] _raw_spin_lock_irqsave+0x47/0x5b
[ 2.530427] [<ffffffff810a02aa>] ? __irq_get_desc_lock+0x8a/0x9a
[ 2.530427] [<ffffffff810a02aa>] __irq_get_desc_lock+0x8a/0x9a
[ 2.530427] [<ffffffff810a15c3>] __disable_irq_nosync+0x23/0x56
[ 2.530427] [<ffffffff810a1604>] disable_irq_nosync+0xe/0x10
[ 2.530427] [<ffffffff813cc1c1>] serial8250_startup+0x287/0x771
[ 2.530427] [<ffffffff813c8496>] uart_startup+0x9a/0x1b8
[ 2.530427] [<ffffffff813c8e7e>] uart_open+0x11f/0x164
[ 2.530427] [<ffffffff813b83dd>] tty_open+0x3f9/0x633
[ 2.530427] [<ffffffff81157d10>] chrdev_open+0x135/0x149
[ 2.530427] [<ffffffff81157bdb>] ? cdev_put+0x15/0x15
[ 2.530427] [<ffffffff81151d9a>] do_dentry_open+0x2d7/0x386
[ 2.530427] [<ffffffff81151e99>] finish_open+0x50/0x64
[ 2.530427] [<ffffffff81161d17>] do_last+0x974/0xc2a
[ 2.530427] [<ffffffff8116229c>] path_openat+0x2cf/0x782
[ 2.530427] [<ffffffff8116372e>] do_filp_open+0x4b/0xf1
[ 2.530427] [<ffffffff816e17a9>] ? _raw_spin_unlock+0x47/0x53
[ 2.530427] [<ffffffff81171c00>] ? __alloc_fd+0x155/0x167
[ 2.530427] [<ffffffff81152e15>] do_sys_open+0x166/0x20e
[ 2.530427] [<ffffffff81152e15>] ? do_sys_open+0x166/0x20e
[ 2.530427] [<ffffffff813db22b>] ? random_int_secret_init+0x1a/0x1e
[ 2.530427] [<ffffffff81152edb>] SyS_open+0x1e/0x20
[ 2.530427] [<ffffffff81fed20c>] kernel_init_freeable+0x1ec/0x255
[ 2.530427] [<ffffffff816d48ed>] ? rest_init+0x131/0x131
[ 2.530427] [<ffffffff816d48fb>] kernel_init+0xe/0xda
[ 2.530427] [<ffffffff816e243a>] ret_from_fork+0x7a/0xb0
[ 2.530427] [<ffffffff816d48ed>] ? rest_init+0x131/0x131
Thanks,
Ying Huang
[PM / sleep] 38106313324: -48.0% turbostat.PkgWatt
by Huang Ying
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 3810631332465d967ba5e27ea2c7dff2c9afac6c ("PM / sleep: Re-implement suspend-to-idle handling")
testbox/testcase/testparams: lkp-sb03/suspend/300s-30x-freeze
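Judging from the testparams, the job performs 30 freeze (suspend-to-idle)
cycles over roughly 300 seconds. A rough equivalent of what it exercises,
assuming RTC wakeups rather than the actual lkp job script, would be:

  for i in $(seq 30); do
      rtcwake -m freeze -s 10   # enter suspend-to-idle, RTC wakeup after 10s
  done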
18320f2a6871aaf2 3810631332465d967ba5e27ea2
---------------- --------------------------
%stddev %change %stddev
\ | \
212 ± 5% -23.3% 162 ± 6% numa-vmstat.node1.nr_kernel_stack
3418 ± 5% -23.6% 2610 ± 6% numa-meminfo.node1.KernelStack
32165483 ± 26% -31.3% 22108958 ± 7% cpuidle.C1E-SNB.time
9.401e+09 ± 1% +36.4% 1.283e+10 ± 0% cpuidle.C7-SNB.time
1137 ± 3% -7.0% 1057 ± 5% slabinfo.blkdev_requests.active_objs
1137 ± 3% -7.0% 1057 ± 5% slabinfo.blkdev_requests.num_objs
50.02 ± 1% -69.5% 15.26 ± 4% turbostat.CPU%c1
49.38 ± 1% +70.5% 84.19 ± 0% turbostat.CPU%c7
12.77 ± 3% -59.0% 5.24 ± 1% turbostat.CorWatt
7.83 ± 11% +545.0% 50.49 ± 0% turbostat.Pkg%pc6
38.91 ± 1% -48.0% 20.22 ± 0% turbostat.PkgWatt
90 ± 31% -34.7% 58 ± 20% sched_debug.cfs_rq[14]:/.tg_runnable_contrib
20 ± 46% -59.3% 8 ± 33% sched_debug.cfs_rq[14]:/.runnable_load_avg
4196 ± 31% -34.6% 2745 ± 20% sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
3489 ± 38% -33.2% 2329 ± 11% sched_debug.cfs_rq[15]:/.avg->runnable_avg_sum
2418 ± 43% +54.7% 3741 ± 21% sched_debug.cfs_rq[17]:/.avg->runnable_avg_sum
447 ± 28% +48.5% 663 ± 11% sched_debug.cfs_rq[17]:/.tg_load_contrib
434 ± 31% +45.7% 633 ± 13% sched_debug.cfs_rq[17]:/.blocked_load_avg
51 ± 44% +54.4% 79 ± 21% sched_debug.cfs_rq[17]:/.tg_runnable_contrib
9268 ± 26% -45.4% 5056 ± 46% sched_debug.cfs_rq[1]:/.min_vruntime
3653 ± 42% +96.0% 7162 ± 13% sched_debug.cfs_rq[24]:/.min_vruntime
4355 ± 14% +51.2% 6584 ± 22% sched_debug.cfs_rq[25]:/.min_vruntime
6 ± 36% -55.6% 3 ± 40% sched_debug.cfs_rq[27]:/.runnable_load_avg
589 ± 10% +24.4% 733 ± 12% sched_debug.cfs_rq[31]:/.tg_load_contrib
585 ± 10% +24.1% 727 ± 11% sched_debug.cfs_rq[31]:/.blocked_load_avg
2484 ± 30% +81.1% 4498 ± 20% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
52 ± 30% +83.9% 97 ± 20% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
14630 ± 41% -56.5% 6370 ± 40% sched_debug.cfs_rq[7]:/.min_vruntime
58 ± 8% -11.6% 51 ± 9% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
2731 ± 8% -12.1% 2402 ± 8% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
18774 ± 43% -60.3% 7460 ± 21% sched_debug.cpu#1.sched_count
16 ± 4% -25.4% 12 ± 18% sched_debug.cpu#10.cpu_load[3]
9 ± 8% -28.2% 7 ± 17% sched_debug.cpu#10.cpu_load[4]
32 ± 21% -33.3% 21 ± 28% sched_debug.cpu#13.cpu_load[1]
28 ± 4% -28.3% 20 ± 25% sched_debug.cpu#13.cpu_load[2]
29 ± 43% -58.0% 12 ± 20% sched_debug.cpu#14.cpu_load[3]
36 ± 26% -42.5% 21 ± 13% sched_debug.cpu#14.cpu_load[1]
215 ± 14% +53.6% 330 ± 9% sched_debug.cpu#14.curr->pid
34 ± 31% -47.8% 18 ± 18% sched_debug.cpu#14.cpu_load[2]
2431 ± 3% +25.3% 3046 ± 20% sched_debug.cpu#16.sched_count
1440 ± 15% +45.3% 2092 ± 34% sched_debug.cpu#16.ttwu_count
2233 ± 5% +12.5% 2512 ± 7% sched_debug.cpu#20.nr_switches
826 ± 16% +65.9% 1370 ± 31% sched_debug.cpu#21.ttwu_count
992 ± 6% +18.0% 1170 ± 4% sched_debug.cpu#23.sched_goidle
308 ± 13% +45.2% 448 ± 19% sched_debug.cpu#23.ttwu_local
2117 ± 5% +14.6% 2426 ± 4% sched_debug.cpu#23.sched_count
2072 ± 6% +14.6% 2373 ± 5% sched_debug.cpu#23.nr_switches
8 ± 44% +61.8% 13 ± 10% sched_debug.cpu#24.nr_uninterruptible
800 ± 17% -26.0% 592 ± 6% sched_debug.cpu#28.sched_goidle
5 ± 37% +195.0% 14 ± 46% sched_debug.cpu#30.cpu_load[1]
9 ± 33% +62.2% 15 ± 12% sched_debug.cpu#31.cpu_load[2]
28 ± 43% +64.9% 47 ± 42% sched_debug.cpu#8.cpu_load[1]
17 ± 8% -21.1% 14 ± 18% sched_debug.cpu#9.cpu_load[3]
lkp-sb03: Sandy Bridge-EP
Memory: 64G
turbostat.CPU%c1
60 ++---------------------------------------------------------------------+
| .*..* |
50 *+*..* + .*.*..*. .*..*.*..*.*..*.*..*.*.*.. |
| *. *.*..* * |
| |
40 ++ |
| |
30 ++ |
| |
20 ++ |
| O O O O O O O O O O O O O O |
O O O O O O O O O O O O O O
10 ++ |
| O O |
0 ++---------------------------------------------------------------------+
turbostat.CPU%c7
100 ++--------------------------------------------------------------------+
| O O |
90 ++ |
O O O O |
| O O O O O O O O O O O O O O O O O O O O O O O O
80 ++ |
| |
70 ++ |
| |
60 ++ |
| |
| *. .*. .*. |
50 *+*..*. + *..*.*. *. *. .*.*..*.*.*..*.*..*.* |
| *..* *. |
40 ++--------------------------------------------------------------------+
turbostat.Pkg%pc6
55 ++---------------------------O-----------------------------------------+
50 O+O O O O O O O O O O O O O O O O O O O O O O O O O O O
| O |
45 ++ |
40 ++ |
| |
35 ++ |
30 ++ |
25 ++ |
| |
20 ++ |
15 ++ |
| .*.. .* |
10 *+ .*.*..* *. .*.*.*. + .*..*.*.. .*..*.*.*..* |
5 ++*--------------*-----------*--*---------*----------------------------+
turbostat.PkgWatt
45 ++-----*---------------------------------------------------------------+
*. .* * .*. |
40 ++*. + .*.. .*. *.. |
| *..*.*..*.* * *.*..*.*..*.*.*..* |
| |
35 ++ |
| |
30 ++ |
| |
25 ++ |
| |
| O |
20 O+O O O O O O O O O O O O O O O O O O O O O O O O O O O O
| |
15 ++---------------------------------------------------------------------+
turbostat.CorWatt
20 ++---------------------------------------------------------------------+
| .*.. |
18 ++ * |
16 *+ .. * *. |
| * + .. *.. |
14 ++ *.. .*..*.*.*..*.* .*..*. .* |
12 ++ * *.*..* *.*. |
| |
10 ++ |
8 ++ |
| |
6 O+O O O O O O O O O O O O O O O O O O O O O |
4 ++ O O O O O O O
| O |
2 ++---------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
Re: [LKP] [ocfs2] e2ac55b6a8e: ocfs2_init:1612 ERROR: Unable to create ocfs2 debugfs root
by Linus Torvalds
On Sun, Apr 19, 2015 at 8:45 PM, Chengyu Song <csong84(a)gatech.edu> wrote:
>
> As suggested in the patch, -19 (-ENODEV) happens when debugfs is not configured (see include/linux/debugfs.h). So if debugfs is necessary for the functionality, in Kconfig, we should either declare it as a dependency, or auto select it.
That makes no sense.
If it used to work before that patch, then this is a regression and
the patch needs to be reverted.
Yes, the old code apparently used to set "o2hb_debug_dir" to an error
pointer when debugfs was compiled out, but since debugfs was compiled
out, that error pointer was probably never actually *used*. So things
presumably worked.
Now, it hangs, according to Huang Ying. If so, that's clearly a
regression. That means that commit e2ac55b6a8e3 ("ocfs2: incorrect
check for debugfs returns") needs to be reverted or fixed.
Andrew?
Linus
[x86/xsaves] 00abd21851c: Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000b00
by Huang Ying
FYI, we noticed the below changes on
https://github.com/fyu1/linux.git xsaves_fix_1
commit 00abd21851c30b12b8463a337c920cdc0aa7e2d1 ("x86/xsaves: Define and use user_xstate_size for xstate size in signal context")
+-----------------------------------------------------------+------------+------------+
| | 67966aacf6 | 00abd21851 |
+-----------------------------------------------------------+------------+------------+
| boot_successes | 6 | 1 |
| boot_failures | 10 | 13 |
| IP-Config:Auto-configuration_of_network_failed | 10 | |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 0 | 13 |
| backtrace:do_group_exit | 0 | 13 |
| backtrace:SyS_exit_group | 0 | 13 |
+-----------------------------------------------------------+------------+------------+
[ 2.423172] Freeing unused kernel memory: 1012K (ffffffff8249f000 - ffffffff8259c000)
[ 2.429032] random: init urandom read with 3 bits of entropy available
[ 2.436423] init[1]: segfault at 0 ip (null) sp 00007fff1dcbc420 error 14 in init[55654d225000+26000]
[ 2.439575] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000b00
[ 2.439575]
[ 2.440012] CPU: 1 PID: 1 Comm: init Not tainted 4.0.0-07590-gd04c8ff #2
[ 2.440012] ffff8800148bbec8 ffff8800148bbdf8 ffffffff817951f6 0000000000000048
[ 2.440012] ffffffff81a46bf8 ffff8800148bbe78 ffffffff81792b93 ffff8800148b4000
[ 2.440012] ffffffff00000010 ffff8800148bbe88 ffff8800148bbe28 ffff8800148bbe78
[ 2.440012] Call Trace:
[ 2.440012] [<ffffffff817951f6>] dump_stack+0x4c/0x65
[ 2.440012] [<ffffffff81792b93>] panic+0xc8/0x208
[ 2.440012] [<ffffffff810ce5c5>] do_exit+0xaf5/0xb40
[ 2.440012] [<ffffffff810ce69f>] do_group_exit+0x4f/0xe0
[ 2.440012] [<ffffffff810ce73f>] SyS_exit_group+0xf/0x10
[ 2.440012] [<ffffffff8179db6e>] system_call_fastpath+0x12/0x76
[ 2.440012] Kernel Offset: disabled
Elapsed time: 5
Thanks,
Ying Huang
[save] dcfb0967eb6: !!! IP-Config: Auto-configuration of network failed !!!
by Huang Ying
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/sashal/linux for-kirill
commit dcfb0967eb60b0745e94567192546730d05e8a49 ("save")
+------------------------------------------------------------------+------------+------------+
| | b66dfec322 | dcfb0967eb |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 2 | 12 |
| boot_failures | 20 | 10 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 20 | |
| backtrace:do_fork | 20 | |
| backtrace:kthreadd | 20 | |
| IP-Config:Auto-configuration_of_network_failed | 0 | 10 |
+------------------------------------------------------------------+------------+------------+
Your commit fixes an OOM when running the BTRFS selftest delalloc tests, but a
new error message shows up in the log, as below.
[ 2.347121] BTRFS: selftest: Running find delalloc tests
[ 2.410829] BTRFS: selftest: Failed to allocate test page
[ 2.414750] stuck in a loop, start 0, end 268435455, nr_pages 7626, ret 0
[ 2.429878] ima: No TPM chip found, activating TPM-bypass!
Elapsed time: 10
qemu-system-i386 -enable-kvm -cpu Haswell,+smep,+smap -kernel /kernel/i386-randconfig-nexs0-04162212/f3de79f90b3efb7b4dc9ad0441adef9281f367ac/vmlinuz-4.0.0-next-20150415-00064-gf3de79f -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-i386-35/rand_boot-1-quantal-core-i386.cgz-i386-randconfig-nexs0-04162212-f3de79f90b3efb7b4dc9ad0441adef9281f367ac-1-20150417-46680-cpfyqc.yaml ARCH=i386 kconfig=i386-randconfig-nexs0-04162212 branch=sashal/for-kirill commit=f3de79f90b3efb7b4dc9ad0441adef9281f367ac BOOT_IMAGE=/kernel/i386-randconfig-nexs0-04162212/f3de79f90b3efb7b4dc9ad0441adef9281f367ac/vmlinuz-4.0.0-next-20150415-00064-gf3de79f RESULT_ROOT=/result/vm-vp-quantal-i386/boot/1/quantal-core-i386.cgz/i386-randconfig-nexs0-04162212/f3de79f90b3efb7b4dc9ad0441adef9281f367ac/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 rd.udev.log-priority=err systemd.log_target=journal systemd.log_level=warning debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-i386-35::dhcp drbd.minor_count=8' -initrd /fs/sdc1/initrd-vm-vp-quantal-i386-35 -m 360 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-i386-35 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-i386-35 -daemonize -display none -monitor null
Thanks,
Ying Huang
Re: [LKP] [ocfs2] e2ac55b6a8e: ocfs2_init:1612 ERROR: Unable to create ocfs2 debugfs root
by Huang Ying
On Sun, 2015-04-19 at 23:45 -0400, Chengyu Song wrote:
> Hi Ying,
>
> As suggested in the patch, -19 (-ENODEV) happens when debugfs is not configured (see include/linux/debugfs.h). So if debugfs is necessary for the functionality, in Kconfig, we should either declare it as a dependency, or auto select it.
I see. Thanks for the explanation!
Best Regards,
Huang, Ying
> Chengyu
>
> > On Apr 19, 2015, at 11:25 PM, Huang Ying <ying.huang(a)intel.com> wrote:
> >
> > FYI, we noticed the below changes on
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> > commit e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6 ("ocfs2: incorrect check for debugfs returns")
> >
> > We found something as below in dmesg.
> >
> > [ 1.225136] VFS: Disk quotas dquot_6.5.2
> > [ 1.225814] VFS: Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
> > [ 1.228207] (swapper/0,1,0):ocfs2_init:1612 ERROR: Unable to create ocfs2 debugfs root.
> > [ 1.232445] (swapper/0,1,0):ocfs2_init:1634 ERROR: status = -19
> > [ 1.233223] ocfs2: Registered cluster interface o2cb
> > [ 1.233994] OCFS2 User DLM kernel interface loaded
> > [ 1.234618] (swapper/0,1,0):o2hb_debug_init:1318 ERROR: status = -19
> > [ 1.236183] gfs2: GFS2 installed
> > [ 1.238123] Key type asymmetric registered
> >
> > BUG: kernel boot hang
> > Elapsed time: 310
> > qemu-system-i386 -enable-kvm -cpu Haswell,+smep,+smap -kernel /kernel/i386-randconfig-ib0-04172352/e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6/vmlinuz-4.0.0-02646-ge2ac55b -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-i386-23/rand_boot-1-quantal-core-i386.cgz-i386-randconfig-ib0-04172352-e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6-1-20150418-31554-1woqtdb.yaml ARCH=i386 kconfig=i386-randconfig-ib0-04172352 branch=linus/master commit=e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6 BOOT_IMAGE=/kernel/i386-randconfig-ib0-04172352/e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6/vmlinuz-4.0.0-02646-ge2ac55b RESULT_ROOT=/result/vm-vp-quantal-i386/boot/1/quantal-core-i386.cgz/i386-randconfig-ib0-04172352/e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 rd.udev.log-priority=err systemd.log_target=journal systemd.log_level=warning debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-i386-23::dhcp drbd.minor_count=8' -initrd /fs/sde1/initrd-vm-vp-quantal-i386-23 -m 360 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-i386-23 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-i386-23 -daemonize -display none -monitor null
> >
> > Thanks,
> > Ying Huang
> >
> > <config-4.0.0-02646-ge2ac55b><.dmesg.txt>
[ocfs2] e2ac55b6a8e: ocfs2_init:1612 ERROR: Unable to create ocfs2 debugfs root
by Huang Ying
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6 ("ocfs2: incorrect check for debugfs returns")
We found something as below in dmesg.
[ 1.225136] VFS: Disk quotas dquot_6.5.2
[ 1.225814] VFS: Dquot-cache hash table entries: 1024 (order 0, 4096 bytes)
[ 1.228207] (swapper/0,1,0):ocfs2_init:1612 ERROR: Unable to create ocfs2 debugfs root.
[ 1.232445] (swapper/0,1,0):ocfs2_init:1634 ERROR: status = -19
[ 1.233223] ocfs2: Registered cluster interface o2cb
[ 1.233994] OCFS2 User DLM kernel interface loaded
[ 1.234618] (swapper/0,1,0):o2hb_debug_init:1318 ERROR: status = -19
[ 1.236183] gfs2: GFS2 installed
[ 1.238123] Key type asymmetric registered
BUG: kernel boot hang
Elapsed time: 310
qemu-system-i386 -enable-kvm -cpu Haswell,+smep,+smap -kernel /kernel/i386-randconfig-ib0-04172352/e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6/vmlinuz-4.0.0-02646-ge2ac55b -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-i386-23/rand_boot-1-quantal-core-i386.cgz-i386-randconfig-ib0-04172352-e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6-1-20150418-31554-1woqtdb.yaml ARCH=i386 kconfig=i386-randconfig-ib0-04172352 branch=linus/master commit=e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6 BOOT_IMAGE=/kernel/i386-randconfig-ib0-04172352/e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6/vmlinuz-4.0.0-02646-ge2ac55b RESULT_ROOT=/result/vm-vp-quantal-i386/boot/1/quantal-core-i386.cgz/i386-randconfig-ib0-04172352/e2ac55b6a8e337fac7cc59c6f452caac92ab5ee6/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 rd.udev.log-priority=err systemd.log_target=journal systemd.log_level=warning debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-i386-23::dhcp drbd.minor_count=8' -initrd /fs/sde1/initrd-vm-vp-quantal-i386-23 -m 360 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-i386-23 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-i386-23 -daemonize -display none -monitor null
Thanks,
Ying Huang
[f2fs] 465a05fecc2: +147.3% fsmark.files_per_sec
by Huang Ying
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 465a05fecc2c7921d50f33cd10621abc049cbabf ("f2fs: enable inline data by default")
testbox/testcase/testparams: lkp-st02/fsmark/1x-32t-1HDD-f2fs-9B-400M-fsyncBeforeClose-16d-256fpd
6a2de6eb3f267ba6 465a05fecc2c7921d50f33cd10
---------------- --------------------------
%stddev %change %stddev
\ | \
679 ± 1% +147.3% 1681 ± 1% fsmark.files_per_sec
15 ± 3% +177.4% 43 ± 0% fsmark.time.percent_of_cpu_this_job_got
150.90 ± 2% -59.5% 61.13 ± 1% fsmark.time.elapsed_time.max
150.90 ± 2% -59.5% 61.13 ± 1% fsmark.time.elapsed_time
1771558 ± 2% +6.2% 1881064 ± 0% fsmark.time.file_system_outputs
949761 ± 0% +22.2% 1160509 ± 0% fsmark.time.voluntary_context_switches
8240 ± 3% +80.1% 14840 ± 4% fsmark.time.involuntary_context_switches
184.27 ± 1% -48.5% 94.88 ± 0% uptime.boot
525 ± 2% -12.3% 460 ± 1% uptime.idle
89898 ± 0% -60.6% 35417 ± 0% softirqs.BLOCK
30856 ± 3% -19.7% 24774 ± 2% softirqs.RCU
34393 ± 0% -9.4% 31172 ± 0% softirqs.SCHED
5775 ± 1% +44.3% 8335 ± 0% vmstat.io.bo
18 ± 3% -47.2% 9 ± 5% vmstat.procs.b
11960 ± 1% +141.8% 28918 ± 0% vmstat.system.in
21443 ± 1% +152.8% 54201 ± 0% vmstat.system.cs
3093 ± 2% +9.5% 3387 ± 4% slabinfo.kmalloc-192.active_objs
3245 ± 3% +4.4% 3388 ± 4% slabinfo.kmalloc-192.num_objs
3627 ± 1% +16.7% 4233 ± 5% slabinfo.vm_area_struct.active_objs
3681 ± 1% +16.9% 4302 ± 5% slabinfo.vm_area_struct.num_objs
32653 ± 2% +10.9% 36223 ± 2% meminfo.Active(anon)
32637 ± 2% +10.4% 36030 ± 2% meminfo.AnonPages
1517 ± 0% -68.3% 481 ± 4% meminfo.Mlocked
3562 ± 1% +15.9% 4127 ± 0% meminfo.PageTables
1517 ± 0% -68.3% 481 ± 4% meminfo.Unevictable
150.90 ± 2% -59.5% 61.13 ± 1% time.elapsed_time.max
150.90 ± 2% -59.5% 61.13 ± 1% time.elapsed_time
8240 ± 3% +80.1% 14840 ± 4% time.involuntary_context_switches
15 ± 3% +177.4% 43 ± 0% time.percent_of_cpu_this_job_got
949761 ± 0% +22.2% 1160509 ± 0% time.voluntary_context_switches
8162 ± 2% +10.9% 9050 ± 2% proc-vmstat.nr_active_anon
8157 ± 2% +10.4% 9004 ± 2% proc-vmstat.nr_anon_pages
150 ± 6% -46.8% 80 ± 4% proc-vmstat.nr_dirty
379 ± 0% -68.4% 119 ± 4% proc-vmstat.nr_mlock
889 ± 1% +15.8% 1029 ± 0% proc-vmstat.nr_page_table_pages
379 ± 0% -68.4% 119 ± 4% proc-vmstat.nr_unevictable
220865 ± 2% -40.2% 131982 ± 1% proc-vmstat.nr_written
411249 ± 0% -23.1% 316279 ± 0% proc-vmstat.numa_hit
411249 ± 0% -23.1% 316279 ± 0% proc-vmstat.numa_local
26883 ± 1% +45.7% 39168 ± 2% proc-vmstat.pgactivate
169935 ± 1% -22.0% 132534 ± 1% proc-vmstat.pgalloc_dma32
264529 ± 1% -23.3% 202778 ± 1% proc-vmstat.pgalloc_normal
231722 ± 1% -51.0% 113542 ± 0% proc-vmstat.pgfault
193885 ± 2% -49.8% 97263 ± 3% proc-vmstat.pgfree
883675 ± 2% -40.2% 528402 ± 1% proc-vmstat.pgpgout
396 ± 6% +72.6% 683 ± 5% sched_debug.cfs_rq[0]:/.tg->runnable_avg
5121 ± 6% +106.0% 10549 ± 7% sched_debug.cfs_rq[0]:/.tg_load_avg
50 ± 12% +66.8% 84 ± 4% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
3813 ± 10% -18.1% 3122 ± 1% sched_debug.cfs_rq[0]:/.exec_clock
2357 ± 12% +65.8% 3907 ± 4% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
5066 ± 7% +107.6% 10516 ± 8% sched_debug.cfs_rq[1]:/.tg_load_avg
402 ± 6% +71.6% 690 ± 4% sched_debug.cfs_rq[1]:/.tg->runnable_avg
47 ± 2% +73.8% 83 ± 3% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
2234 ± 2% +71.6% 3832 ± 2% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
144 ± 15% -34.3% 94 ± 16% sched_debug.cfs_rq[2]:/.load
2213 ± 8% +94.6% 4307 ± 3% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
5026 ± 7% +105.0% 10306 ± 9% sched_debug.cfs_rq[2]:/.tg_load_avg
47 ± 9% +96.3% 93 ± 4% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
404 ± 6% +70.6% 690 ± 4% sched_debug.cfs_rq[2]:/.tg->runnable_avg
25 ± 10% -33.7% 16 ± 12% sched_debug.cfs_rq[2]:/.runnable_load_avg
591 ± 33% +176.1% 1632 ± 31% sched_debug.cfs_rq[3]:/.blocked_load_avg
620 ± 32% +170.6% 1678 ± 31% sched_debug.cfs_rq[3]:/.tg_load_contrib
5004 ± 7% +105.0% 10259 ± 9% sched_debug.cfs_rq[3]:/.tg_load_avg
408 ± 5% +69.8% 693 ± 4% sched_debug.cfs_rq[3]:/.tg->runnable_avg
13 ± 12% -25.0% 9 ± 15% sched_debug.cfs_rq[3]:/.nr_spread_over
2289 ± 4% +64.4% 3763 ± 4% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
48 ± 4% +67.5% 81 ± 4% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
60 ± 27% +50.8% 90 ± 10% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
412 ± 5% +68.9% 696 ± 4% sched_debug.cfs_rq[4]:/.tg->runnable_avg
2784 ± 26% +49.6% 4166 ± 10% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
4946 ± 8% +106.3% 10206 ± 8% sched_debug.cfs_rq[4]:/.tg_load_avg
55 ± 8% +55.9% 85 ± 7% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
4907 ± 8% +106.6% 10141 ± 9% sched_debug.cfs_rq[5]:/.tg_load_avg
415 ± 6% +68.1% 698 ± 4% sched_debug.cfs_rq[5]:/.tg->runnable_avg
2564 ± 8% +53.9% 3947 ± 7% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
191 ± 13% -33.9% 126 ± 31% sched_debug.cfs_rq[5]:/.load
2337 ± 6% +66.3% 3888 ± 8% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
419 ± 6% +67.2% 700 ± 4% sched_debug.cfs_rq[6]:/.tg->runnable_avg
4900 ± 8% +106.2% 10103 ± 9% sched_debug.cfs_rq[6]:/.tg_load_avg
50 ± 7% +69.5% 84 ± 8% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
4844 ± 7% +106.1% 9983 ± 9% sched_debug.cfs_rq[7]:/.tg_load_avg
2327 ± 7% +79.9% 4185 ± 15% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
50 ± 7% +81.5% 90 ± 16% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
423 ± 5% +66.7% 705 ± 4% sched_debug.cfs_rq[7]:/.tg->runnable_avg
295623 ± 17% -18.7% 240304 ± 6% sched_debug.cpu#0.nr_switches
107733 ± 24% -37.6% 67259 ± 10% sched_debug.cpu#0.ttwu_local
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#0.clock_task
135495 ± 18% -23.9% 103117 ± 7% sched_debug.cpu#0.sched_goidle
26384 ± 7% -20.4% 20995 ± 11% sched_debug.cpu#0.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#0.clock
296932 ± 17% -18.7% 241501 ± 6% sched_debug.cpu#0.sched_count
30451 ± 4% -26.1% 22493 ± 12% sched_debug.cpu#1.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#1.clock_task
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#1.clock
3 ± 25% +153.8% 8 ± 48% sched_debug.cpu#1.cpu_load[4]
715589 ± 5% -19.1% 578901 ± 11% sched_debug.cpu#1.avg_idle
142 ± 16% -33.3% 94 ± 16% sched_debug.cpu#2.load
5 ± 31% +114.3% 11 ± 15% sched_debug.cpu#2.cpu_load[3]
126380 ± 10% -18.8% 102610 ± 3% sched_debug.cpu#2.sched_goidle
418 ± 19% -41.2% 245 ± 12% sched_debug.cpu#2.curr->pid
277074 ± 9% -13.8% 238909 ± 3% sched_debug.cpu#2.sched_count
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#2.clock_task
277012 ± 9% -13.8% 238854 ± 3% sched_debug.cpu#2.nr_switches
9 ± 28% +89.5% 18 ± 27% sched_debug.cpu#2.cpu_load[2]
98388 ± 14% -32.3% 66641 ± 5% sched_debug.cpu#2.ttwu_local
740677 ± 2% -21.0% 584865 ± 9% sched_debug.cpu#2.avg_idle
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#2.clock
3 ± 31% +142.9% 8 ± 31% sched_debug.cpu#2.cpu_load[4]
25773 ± 9% -19.9% 20654 ± 8% sched_debug.cpu#2.nr_load_updates
690785 ± 3% -14.3% 591985 ± 6% sched_debug.cpu#3.avg_idle
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#3.clock_task
30028 ± 5% -30.5% 20877 ± 13% sched_debug.cpu#3.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#3.clock
113746 ± 18% -37.1% 71517 ± 18% sched_debug.cpu#4.ttwu_local
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#4.clock
27593 ± 7% -36.0% 17656 ± 4% sched_debug.cpu#4.nr_load_updates
106260 ± 1% -40.8% 62862 ± 0% sched_debug.cpu#4.clock_task
141639 ± 14% -24.8% 106524 ± 12% sched_debug.cpu#4.sched_goidle
184 ± 13% -31.3% 126 ± 31% sched_debug.cpu#5.load
28115 ± 9% -26.4% 20695 ± 15% sched_debug.cpu#5.nr_load_updates
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#5.clock_task
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#5.clock
710881 ± 5% -13.0% 618742 ± 11% sched_debug.cpu#5.avg_idle
151901 ± 11% -29.7% 106778 ± 9% sched_debug.cpu#6.sched_goidle
106261 ± 1% -40.8% 62861 ± 0% sched_debug.cpu#6.clock_task
106261 ± 1% -40.8% 62861 ± 0% sched_debug.cpu#6.clock
124231 ± 14% -42.0% 72021 ± 14% sched_debug.cpu#6.ttwu_local
27846 ± 4% -38.4% 17152 ± 3% sched_debug.cpu#6.nr_load_updates
175655 ± 9% -21.3% 138225 ± 7% sched_debug.cpu#6.ttwu_count
328538 ± 10% -25.1% 246230 ± 8% sched_debug.cpu#6.nr_switches
328606 ± 10% -25.1% 246282 ± 8% sched_debug.cpu#6.sched_count
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#7.clock
27346 ± 9% -26.6% 20061 ± 15% sched_debug.cpu#7.nr_load_updates
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu#7.clock_task
106261 ± 1% -40.8% 62863 ± 0% sched_debug.cpu_clk
106261 ± 1% -40.8% 62863 ± 0% sched_debug.ktime
107044 ± 1% -40.5% 63646 ± 0% sched_debug.sched_clk
testbox/testcase/testparams: nhm4/fsmark/performance-1x-32t-1HDD-f2fs-9B-400M-fsyncBeforeClose-16d-256fpd
6a2de6eb3f267ba6 465a05fecc2c7921d50f33cd10
---------------- --------------------------
9381393 ± 2% -52.3% 4475975 ± 15% fsmark.app_overhead
559 ± 0% +134.7% 1313 ± 0% fsmark.files_per_sec
12 ± 0% +214.6% 37 ± 1% fsmark.time.percent_of_cpu_this_job_got
183.94 ± 0% -57.3% 78.57 ± 0% fsmark.time.elapsed_time.max
183.94 ± 0% -57.3% 78.57 ± 0% fsmark.time.elapsed_time
1839298 ± 0% +4.3% 1918608 ± 0% fsmark.time.file_system_outputs
22.50 ± 0% +30.0% 29.25 ± 0% fsmark.time.system_time
761103 ± 0% +37.7% 1048153 ± 0% fsmark.time.voluntary_context_switches
19670 ± 0% -30.8% 13615 ± 3% fsmark.time.involuntary_context_switches
206.26 ± 1% -51.7% 99.57 ± 0% uptime.boot
367 ± 5% +42.1% 522 ± 1% uptime.idle
47369 ± 0% -65.4% 16377 ± 0% softirqs.BLOCK
29294 ± 2% -13.9% 25233 ± 2% softirqs.RCU
2.29 ± 0% +151.2% 5.76 ± 0% turbostat.%Busy
68 ± 0% +174.0% 187 ± 0% turbostat.Avg_MHz
40.65 ± 1% +82.9% 74.36 ± 0% turbostat.CPU%c1
48.36 ± 0% -76.3% 11.44 ± 4% turbostat.CPU%c3
4943 ± 0% +37.1% 6778 ± 0% vmstat.io.bo
14 ± 2% -33.9% 9 ± 11% vmstat.procs.b
5720 ± 0% +112.1% 12131 ± 0% vmstat.system.in
18895 ± 0% +135.5% 44500 ± 0% vmstat.system.cs
58365 ± 2% -8.8% 53258 ± 0% meminfo.DirectMap4k
1541 ± 0% -60.0% 616 ± 0% meminfo.Mlocked
1541 ± 0% -60.0% 616 ± 0% meminfo.Unevictable
183.94 ± 0% -57.3% 78.57 ± 0% time.elapsed_time.max
183.94 ± 0% -57.3% 78.57 ± 0% time.elapsed_time
19670 ± 0% -30.8% 13615 ± 3% time.involuntary_context_switches
12 ± 0% +214.6% 37 ± 1% time.percent_of_cpu_this_job_got
22.50 ± 0% +30.0% 29.25 ± 0% time.system_time
1.13 ± 2% -28.6% 0.80 ± 3% time.user_time
761103 ± 0% +37.7% 1048153 ± 0% time.voluntary_context_switches
982895 ± 0% -28.3% 704285 ± 0% cpuidle.C1-NHM.usage
3.274e+08 ± 1% -14.2% 2.81e+08 ± 1% cpuidle.C1-NHM.time
180656 ± 3% +254.8% 641058 ± 0% cpuidle.C1E-NHM.usage
42872379 ± 2% +96.4% 84210125 ± 0% cpuidle.C1E-NHM.time
373512 ± 0% -60.1% 149080 ± 1% cpuidle.C3-NHM.usage
7.652e+08 ± 0% -85.6% 1.099e+08 ± 2% cpuidle.C3-NHM.time
3.122e+08 ± 3% -59.8% 1.254e+08 ± 1% cpuidle.C6-NHM.time
134366 ± 2% -25.7% 99891 ± 1% cpuidle.C6-NHM.usage
2400 ± 1% +80.8% 4339 ± 1% cpuidle.POLL.usage
143 ± 2% -48.3% 74 ± 6% proc-vmstat.nr_dirty
385 ± 0% -60.0% 154 ± 0% proc-vmstat.nr_mlock
385 ± 0% -60.0% 154 ± 0% proc-vmstat.nr_unevictable
230004 ± 0% -40.3% 137234 ± 0% proc-vmstat.nr_written
436867 ± 0% -25.6% 325170 ± 0% proc-vmstat.numa_hit
436867 ± 0% -25.6% 325170 ± 0% proc-vmstat.numa_local
24976 ± 0% +79.6% 44865 ± 0% proc-vmstat.pgactivate
467392 ± 0% -24.9% 350901 ± 0% proc-vmstat.pgalloc_dma32
268016 ± 0% -51.5% 129939 ± 0% proc-vmstat.pgfault
226937 ± 1% -51.6% 109882 ± 1% proc-vmstat.pgfree
919916 ± 0% -40.3% 548911 ± 0% proc-vmstat.pgpgout
283 ± 3% +131.3% 655 ± 5% sched_debug.cfs_rq[0]:/.tg->runnable_avg
7165 ± 5% +85.5% 13292 ± 2% sched_debug.cfs_rq[0]:/.tg_load_avg
6701 ± 7% +11.4% 7465 ± 5% sched_debug.cfs_rq[0]:/.min_vruntime
38 ± 6% +133.6% 88 ± 5% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
1783 ± 6% +129.9% 4100 ± 5% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
7165 ± 5% +85.2% 13270 ± 2% sched_debug.cfs_rq[1]:/.tg_load_avg
285 ± 3% +130.6% 658 ± 5% sched_debug.cfs_rq[1]:/.tg->runnable_avg
41 ± 24% +103.0% 83 ± 5% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
4678 ± 4% +19.4% 5586 ± 5% sched_debug.cfs_rq[1]:/.min_vruntime
1920 ± 24% +100.2% 3843 ± 5% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
4979 ± 7% +14.6% 5704 ± 6% sched_debug.cfs_rq[2]:/.min_vruntime
1682 ± 5% +127.1% 3820 ± 6% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
7133 ± 5% +86.0% 13267 ± 2% sched_debug.cfs_rq[2]:/.tg_load_avg
36 ± 6% +129.9% 82 ± 6% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
287 ± 3% +130.2% 662 ± 5% sched_debug.cfs_rq[2]:/.tg->runnable_avg
7125 ± 5% +82.1% 12973 ± 2% sched_debug.cfs_rq[3]:/.tg_load_avg
290 ± 4% +128.4% 663 ± 5% sched_debug.cfs_rq[3]:/.tg->runnable_avg
1624 ± 4% +134.6% 3810 ± 8% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
34 ± 5% +140.9% 82 ± 9% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
32 ± 10% +149.2% 81 ± 6% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
293 ± 4% +127.2% 665 ± 5% sched_debug.cfs_rq[4]:/.tg->runnable_avg
1172 ± 29% +137.4% 2782 ± 18% sched_debug.cfs_rq[4]:/.blocked_load_avg
3666 ± 4% +20.7% 4426 ± 6% sched_debug.cfs_rq[4]:/.min_vruntime
1214 ± 31% +138.4% 2893 ± 17% sched_debug.cfs_rq[4]:/.tg_load_contrib
1535 ± 10% +143.7% 3741 ± 6% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
7125 ± 5% +82.3% 12986 ± 2% sched_debug.cfs_rq[4]:/.tg_load_avg
34 ± 17% +133.8% 79 ± 15% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
7121 ± 5% +82.8% 13020 ± 2% sched_debug.cfs_rq[5]:/.tg_load_avg
294 ± 4% +127.2% 669 ± 5% sched_debug.cfs_rq[5]:/.tg->runnable_avg
1034 ± 32% +123.5% 2312 ± 23% sched_debug.cfs_rq[5]:/.tg_load_contrib
3739 ± 2% +28.2% 4792 ± 12% sched_debug.cfs_rq[5]:/.min_vruntime
1589 ± 16% +131.5% 3679 ± 15% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
1001 ± 32% +128.8% 2292 ± 23% sched_debug.cfs_rq[5]:/.blocked_load_avg
3851 ± 1% +19.0% 4583 ± 5% sched_debug.cfs_rq[6]:/.min_vruntime
1882 ± 5% +94.7% 3666 ± 5% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
297 ± 4% +126.0% 671 ± 5% sched_debug.cfs_rq[6]:/.tg->runnable_avg
7120 ± 5% +82.7% 13005 ± 2% sched_debug.cfs_rq[6]:/.tg_load_avg
40 ± 5% +95.1% 79 ± 5% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
3826 ± 2% +17.5% 4495 ± 6% sched_debug.cfs_rq[7]:/.min_vruntime
7078 ± 5% +83.5% 12987 ± 1% sched_debug.cfs_rq[7]:/.tg_load_avg
1517 ± 9% +155.4% 3875 ± 14% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
32 ± 9% +162.5% 84 ± 15% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
299 ± 3% +125.0% 673 ± 5% sched_debug.cfs_rq[7]:/.tg->runnable_avg
331525 ± 3% -34.0% 218740 ± 5% sched_debug.cpu#0.nr_switches
142801 ± 4% -48.4% 73667 ± 5% sched_debug.cpu#0.ttwu_local
678550 ± 7% -32.7% 456720 ± 34% sched_debug.cpu#0.avg_idle
313488 ± 1% -31.1% 215938 ± 4% sched_debug.cpu#0.ttwu_count
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#0.clock_task
155708 ± 3% -38.2% 96199 ± 5% sched_debug.cpu#0.sched_goidle
28188 ± 1% -33.4% 18783 ± 3% sched_debug.cpu#0.nr_load_updates
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#0.clock
331695 ± 3% -34.0% 218808 ± 5% sched_debug.cpu#0.sched_count
446397 ± 5% -18.7% 362850 ± 3% sched_debug.cpu#1.sched_count
208484 ± 6% -13.1% 181078 ± 3% sched_debug.cpu#1.ttwu_count
29358 ± 2% -32.6% 19790 ± 1% sched_debug.cpu#1.nr_load_updates
111537 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#1.clock_task
111537 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#1.clock
446310 ± 5% -18.7% 362793 ± 3% sched_debug.cpu#1.nr_switches
214342 ± 6% -22.4% 166319 ± 3% sched_debug.cpu#1.sched_goidle
166966 ± 8% -26.5% 122663 ± 4% sched_debug.cpu#1.ttwu_local
707893 ± 7% -31.1% 487966 ± 9% sched_debug.cpu#1.avg_idle
198095 ± 3% -11.9% 174593 ± 9% sched_debug.cpu#2.sched_goidle
111536 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#2.clock_task
151594 ± 4% -12.4% 132847 ± 11% sched_debug.cpu#2.ttwu_local
711629 ± 8% -48.7% 364936 ± 30% sched_debug.cpu#2.avg_idle
111536 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#2.clock
28629 ± 4% -32.1% 19433 ± 7% sched_debug.cpu#2.nr_load_updates
164980 ± 11% -32.8% 110931 ± 7% sched_debug.cpu#3.ttwu_local
441918 ± 8% -24.3% 334685 ± 6% sched_debug.cpu#3.sched_count
441832 ± 8% -24.3% 334627 ± 6% sched_debug.cpu#3.nr_switches
718484 ± 5% -34.8% 468267 ± 21% sched_debug.cpu#3.avg_idle
207428 ± 9% -18.2% 169613 ± 6% sched_debug.cpu#3.ttwu_count
111536 ± 2% -49.4% 56462 ± 3% sched_debug.cpu#3.clock_task
29300 ± 4% -33.8% 19394 ± 7% sched_debug.cpu#3.nr_load_updates
111536 ± 2% -49.4% 56462 ± 3% sched_debug.cpu#3.clock
212194 ± 9% -28.0% 152779 ± 6% sched_debug.cpu#3.sched_goidle
2015 ± 1% -36.0% 1290 ± 1% sched_debug.cpu#4.nr_uninterruptible
68219 ± 3% +16.4% 79412 ± 3% sched_debug.cpu#4.ttwu_count
111535 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#4.clock
15884 ± 0% -35.3% 10276 ± 5% sched_debug.cpu#4.nr_load_updates
111535 ± 2% -49.4% 56461 ± 3% sched_debug.cpu#4.clock_task
101874 ± 19% -26.3% 75129 ± 7% sched_debug.cpu#5.sched_goidle
67242 ± 29% -39.0% 41027 ± 11% sched_debug.cpu#5.ttwu_local
16499 ± 7% -36.4% 10496 ± 5% sched_debug.cpu#5.nr_load_updates
218670 ± 17% -19.5% 175923 ± 6% sched_debug.cpu#5.nr_switches
3 ± 25% +123.1% 7 ± 46% sched_debug.cpu#5.cpu_load[4]
111534 ± 2% -49.4% 56454 ± 3% sched_debug.cpu#5.clock_task
111534 ± 2% -49.4% 56454 ± 3% sched_debug.cpu#5.clock
748334 ± 5% -24.6% 563910 ± 15% sched_debug.cpu#5.avg_idle
218737 ± 17% -19.6% 175958 ± 6% sched_debug.cpu#5.sched_count
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#6.clock_task
111534 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#6.clock
15913 ± 1% -31.5% 10903 ± 10% sched_debug.cpu#6.nr_load_updates
697624 ± 9% -19.9% 559027 ± 3% sched_debug.cpu#6.avg_idle
199869 ± 3% -17.1% 165770 ± 8% sched_debug.cpu#7.sched_count
199797 ± 3% -17.0% 165733 ± 8% sched_debug.cpu#7.nr_switches
111533 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#7.clock
16124 ± 2% -36.8% 10191 ± 7% sched_debug.cpu#7.nr_load_updates
57841 ± 6% -37.1% 36364 ± 13% sched_debug.cpu#7.ttwu_local
92125 ± 3% -24.0% 70057 ± 9% sched_debug.cpu#7.sched_goidle
111533 ± 2% -49.4% 56460 ± 3% sched_debug.cpu#7.clock_task
111537 ± 2% -49.4% 56462 ± 3% sched_debug.cpu_clk
111387 ± 2% -49.4% 56313 ± 3% sched_debug.ktime
111537 ± 2% -49.4% 56462 ± 3% sched_debug.sched_clk
lkp-st02: Core2
Memory: 8G
nhm4: Nehalem
Memory: 4G
vmstat.system.cs
60000 ++------------------------------------------------------------------+
| O O O O O O O O O O O O O O O O O O
50000 O+ O O O |
| |
| |
40000 ++ |
| .*. |
30000 ++.*.. *.. .*. *.. |
*. * : *. |
20000 ++ : : *..*..*..*..*.*..*..* |
| : : |
| : : |
10000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
softirqs.BLOCK
120000 ++-----------------------------------------------------------------+
| |
100000 *+.*..* *..*.*..*..*.. |
| : : .*..*.. .*. .* |
| : : *..* *. *. |
80000 ++ : : |
| : : |
60000 ++ : : |
| : : |
40000 ++ : : |
O O:O O : O O O O O O O O O O O O O O O O O O
| : : |
20000 ++ : : |
| : : |
0 ++-O----*--*--O-----O----------------------------------------------+
fsmark.files_per_sec
1800 ++-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O
1600 ++ |
1400 ++ |
| |
1200 ++ |
1000 ++ .*.. |
*..*..* *..*..*. *.. |
800 ++ : : |
600 ++ : : *..*.*..*..*..*..*..* |
| : : |
400 ++ : : |
200 ++ : : |
| : : |
0 ++-O-----*--*-O-----O------------------------------------------------+
fsmark.time.percent_of_cpu_this_job_got
45 ++---------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O
40 ++ |
35 ++ |
| |
30 ++ |
25 ++ |
| *.. .*.. |
20 *+.*..* : *..*. *.. |
15 ++ : : *..*.*..*..*..*..*..* |
| : : |
10 ++ : : |
5 ++ : : |
| : : |
0 ++-O-----*--*--O-----O-------------------------------------------------+
fsmark.time.voluntary_context_switches
1.2e+06 ++--------------------O-------O--------O-----------------------O--O
O O O O O O O O O O O O O O O O O |
1e+06 ++ |
*..*..* *.*..*..*..*.*..*..*..*.*..*..*..* |
| : : |
800000 ++ : : |
| : : |
600000 ++ : : |
| : : |
400000 ++ : : |
| : : |
| : : |
200000 ++ : : |
| : : |
0 ++-O----*--*--O----O----------------------------------------------+
fsmark.time.involuntary_context_switches
16000 ++------------------------------------------------------------------O
O O O O O O O O O O O O O O |
14000 ++ O O O O O O O |
12000 ++ |
| |
10000 ++ *.. |
*..*..* : *..*..*.*.. .*.. |
8000 ++ : : *..*..*..*..*.*. * |
| : : |
6000 ++ : : |
4000 ++ : : |
| : : |
2000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
time.system_time
30 ++---------------------------------------------------------------------+
| |
25 O+ O O O O O O O O O O O O O O O O O O O O O
| .*..*.*..*..*..*..*..* |
*..*..* *..*..*..*..*. |
20 ++ : : |
| : : |
15 ++ : : |
| : : |
10 ++ : : |
| : : |
| : : |
5 ++ : : |
| : : |
0 ++-O-----*--*--O-----O-------------------------------------------------+
time.percent_of_cpu_this_job_got
45 ++---------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O
40 ++ |
35 ++ |
| |
30 ++ |
25 ++ |
| *.. .*.. |
20 *+.*..* : *..*. *.. |
15 ++ : : *..*.*..*..*..*..*..* |
| : : |
10 ++ : : |
5 ++ : : |
| : : |
0 ++-O-----*--*--O-----O-------------------------------------------------+
time.voluntary_context_switches
1.2e+06 ++--------------------O-------O--------O-----------------------O--O
O O O O O O O O O O O O O O O O O |
1e+06 ++ |
*..*..* *.*..*..*..*.*..*..*..*.*..*..*..* |
| : : |
800000 ++ : : |
| : : |
600000 ++ : : |
| : : |
400000 ++ : : |
| : : |
| : : |
200000 ++ : : |
| : : |
0 ++-O----*--*--O----O----------------------------------------------+
time.involuntary_context_switches
16000 ++------------------------------------------------------------------O
O O O O O O O O O O O O O O |
14000 ++ O O O O O O O |
12000 ++ |
| |
10000 ++ *.. |
*..*..* : *..*..*.*.. .*.. |
8000 ++ : : *..*..*..*..*.*. * |
| : : |
6000 ++ : : |
4000 ++ : : |
| : : |
2000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
proc-vmstat.nr_dirty
180 ++--------------------------------------------------------------------+
| *.. *.. |
160 ++. .*..*.. .*..*..*.. .. .*..* |
140 *+ * *.*. *. * *. |
| : : |
120 ++ : : |
100 ++ : : O O O |
| O: O :O O O O O O |
80 ++ : : O O O O O O O
60 O+ :O : O O |
| : : |
40 ++ : : |
20 ++ : : |
| : : |
0 ++-O-----*--*--O----O-------------------------------------------------+
proc-vmstat.nr_written
250000 ++-----------------------------------------------------------------+
| .* .*.. .*. |
*..*. : *..*.*..*..*..*..*.*. *. *..* |
200000 ++ : : |
| : : |
| : : |
150000 ++ : :O |
O O:O O : O O O O O O O O O O O O O O O O O
100000 ++ : : |
| : : |
| : : |
50000 ++ : : |
| : : |
| : : |
0 ++-O----*--*--O-----O----------------------------------------------+
proc-vmstat.pgpgout
1e+06 ++-----------------------------------------------------------------+
900000 ++ .* .*.. .*. |
*..*. : *..*.*..*..*..*..*.*. *. *..* |
800000 ++ : : |
700000 ++ : : |
| : : |
600000 ++ : :O |
500000 O+ O:O O : O O O O O O O O O O O O O O O O O
400000 ++ : : |
| : : |
300000 ++ : : |
200000 ++ : : |
| : : |
100000 ++ : : |
0 ++-O----*--*--O-----O----------------------------------------------+
proc-vmstat.pgactivate
45000 ++------------------------------------------------------------------+
| |
40000 O+ O O O O O O O O O O O O O O O O O O O
35000 ++ O O |
| |
30000 *+.*..* *..*..*..*.*.. .*.. .*.. |
25000 ++ : : *. *..*..*.*. * |
| : : |
20000 ++ : : |
15000 ++ : : |
| : : |
10000 ++ : : |
5000 ++ : : |
| : : |
0 ++-O-----*-*--O-----O-----------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang