On 10/30/20 10:58 PM, Matthew Wilcox wrote:
On Fri, Oct 30, 2020 at 10:02:45PM +0800, Chen, Rong A wrote:
> On 10/30/2020 9:17 PM, Matthew Wilcox wrote:
>> On Fri, Oct 30, 2020 at 03:17:15PM +0800, kernel test robot wrote:
>>> Details are as below:
>>>
>>>
>>>
>>> To reproduce:
>>>
>>> git clone https://github.com/intel/lkp-tests.git
>>> cd lkp-tests
>>> bin/lkp install job.yaml # job file is attached in this email
>>> bin/lkp run job.yaml
>> Do you actually test these instructions before you send them out?
>>
>> hdd_partitions: "/dev/disk/by-id/ata-WDC_WD2500BEKT-00PVMT0_WD-WX11A23L4840-part1"
>> ssd_partitions: "/dev/nvme1n1p1 /dev/nvme0n1p1"
>> rootfs_partition: "/dev/disk/by-id/ata-INTEL_SSDSC2CW240A3_CVCV204303WP240CGN-part1"
>>
>> That's _very_ specific to a given machine. I'm not familiar with
>> this test, so I don't know what I need to change.
>
> Hi Matthew,
>
> Sorry about that; I copied the job.yaml file from the server.
> The right way is to set your own disk partitions in the yaml;
> please see https://github.com/intel/lkp-tests#run-your-own-disk-partitions.
>
> There is another reproduce script attached to the original mail
> for your reference.
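
[For illustration, per the README section linked above, the partition keys in job.yaml would be set to devices that exist on your own machine. The device names below are placeholders, not values from the original report:]

```yaml
# Hypothetical job.yaml fragment: replace the server-specific devices
# with partitions present on your local machine (placeholder names).
hdd_partitions: "/dev/sdb1"
ssd_partitions: "/dev/nvme0n1p1"
rootfs_partition: "/dev/sda1"
```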
Can you reproduce this? Here are my results:
# stress-ng "--timeout" "100" "--times" "--verify" "--metrics-brief" "--sequential" "96" "--class" "memory" "--minimize" "--exclude" "spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib"
stress-ng: info: [7670] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
stress-ng: info: [7670] dispatching hogs: 96 tmpfs
stress-ng: info: [7670] successful run completed in 100.23s (1 min, 40.23 secs)
stress-ng: info: [7670] stressor   bogo ops  real time  usr time  sys time   bogo ops/s     bogo ops/s
stress-ng: info: [7670]                        (secs)     (secs)    (secs)  (real time) (usr+sys time)
stress-ng: info: [7670] tmpfs          8216     100.10     368.02    230.89        82.08          13.72
stress-ng: info: [7670] for a 100.23s run time:
stress-ng: info: [7670] 601.38s available CPU time
stress-ng: info: [7670] 368.71s user time ( 61.31%)
stress-ng: info: [7670] 231.55s system time ( 38.50%)
stress-ng: info: [7670] 600.26s total time ( 99.81%)
stress-ng: info: [7670] load average: 78.32 27.87 10.10
Hi Matthew,
IIUC, yes, we can reproduce it. Here is the result from the server:
$ stress-ng --timeout 100 --times --verify --metrics-brief --sequential 96 --class memory --minimize --exclude spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib
stress-ng: info: [2765] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
stress-ng: info: [2765] dispatching hogs: 96 tmpfs
stress-ng: info: [2765] successful run completed in 104.67s (1 min, 44.67 secs)
stress-ng: info: [2765] stressor   bogo ops  real time  usr time  sys time   bogo ops/s     bogo ops/s
stress-ng: info: [2765]                        (secs)     (secs)    (secs)  (real time) (usr+sys time)
stress-ng: info: [2765] tmpfs           363     103.02     622.07   7289.85         3.52           0.05
stress-ng: info: [2765] for a 104.67s run time:
stress-ng: info: [2765] 10047.98s available CPU time
stress-ng: info: [2765] 622.46s user time ( 6.19%)
stress-ng: info: [2765] 7290.66s system time ( 72.56%)
stress-ng: info: [2765] 7913.12s total time ( 78.75%)
stress-ng: info: [2765] load average: 79.62 28.89 10.45
We compared tmpfs.ops_per_sec (363 / 103.02, i.e. ~3.52) between this commit and its parent commit.
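
[As a worked example of that comparison (an editorial sketch, not part of lkp-tests itself): the ops_per_sec metric is the stressor's bogo ops divided by real time, using the figures from the two runs above.]

```python
# Sketch: derive tmpfs.ops_per_sec (bogo ops / real time) from the
# stress-ng figures quoted in the two runs above.

def ops_per_sec(bogo_ops: int, real_time_secs: float) -> float:
    """Bogo operations per second of real (wall-clock) time."""
    return bogo_ops / real_time_secs

# Matthew's run: 8216 bogo ops in 100.10 s of real time
good = ops_per_sec(8216, 100.10)   # ~82.08, matching the first table
# Server run on this commit: 363 bogo ops in 103.02 s
bad = ops_per_sec(363, 103.02)     # ~3.52, matching the second table

print(f"good run: {good:.2f} ops/s")
print(f"this commit: {bad:.2f} ops/s")
print(f"ratio: {good / bad:.1f}x")
```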
Best Regards,
Rong Chen