The email seems to have gone through :)
I think the code is in the Git repository (well, maybe not this latest VTune
version).
These are very interesting results. On a side note, please don't take my
lack of reaction to all these results (Nick, you have been looking at a lot
of things) as a lack of interest; things are just getting pushed into a queue
and will get popped at some point :)
Thanks,
Romain
From: ocr-dev-bounces(a)lists.01.org [mailto:ocr-dev-bounces@lists.01.org] On
Behalf Of Vincent Cavé
Sent: Thursday, January 31, 2013 12:01 PM
To: Technical discussion about OCR
Subject: [OCR-dev] Fwd: Execution time breakdowns
Hi,
I'm forwarding Nicholas' email since I don't remember the admin password to
approve it myself.
One concern we have is that when the timing gets bad, most of the time
is spent trying to steal work, which would indicate there isn't enough work
available. Could it be a granularity problem?
Another thing is that every run seems to break around 16 cores/workers;
we're wondering if there could be something hard-coded in ocrInit or
somewhere else.
Nicholas, can you send us the code you're using for both the pthread and OCR
versions so that we can have a look?
Best,
Vincent
Begin forwarded message:
From: ocr-dev-owner(a)lists.01.org
Subject: OCR-dev post from nicholas.p.carter(a)intel.com requires approval
Date: January 29, 2013 9:45:27 AM CST
To: ocr-dev-owner(a)lists.01.org
As list administrator, your authorization is requested for the
following mailing list posting:
List: OCR-dev(a)lists.01.org
From: nicholas.p.carter(a)intel.com
Subject: Execution time breakdowns
Reason: Message body is too big: 1549440 bytes with a limit of 1024 KB
At your convenience, visit:
https://lists.01.org/mailman/admindb/ocr-dev
to approve or deny the request.
From: "Carter, Nicholas P" <nicholas.p.carter(a)intel.com>
Subject: Execution time breakdowns
Date: January 29, 2013 4:03:52 PM CST
To: "ocr-dev(a)lists.01.org" <ocr-dev(a)lists.01.org>
Benoit's question about execution time breakdowns got me thinking about how
to script VTune to generate the sort of data he was looking for, and it
turned out not to be too hard. (Meaning that it took a while to figure out
but is pretty easy once you know how.)
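For concreteness, here is a rough sketch of what the collection side of such
scripting can look like. The amplxe-cl tool name, the hotspots analysis type,
and the report flags follow VTune Amplifier XE's command-line interface circa
2013 (exact flags may differ by version), and the ./bench binary with its
array-size/chunk-size arguments is a made-up placeholder, not Nick's actual
script:

#!/usr/bin/env python
# Sketch: collect a VTune hotspots profile for one configuration and dump
# a per-function CSV report for later processing. The amplxe-cl tool name,
# analysis type, and flags follow VTune Amplifier XE's CLI (circa 2013);
# the ./bench binary and its array-size/chunk-size arguments are made up.
import subprocess

def profile_run(array_size, chunk_size, result_dir):
    # Collect a hotspots profile for one (array size, chunk size) point.
    subprocess.check_call(
        ["amplxe-cl", "-collect", "hotspots", "-result-dir", result_dir,
         "--", "./bench", str(array_size), str(chunk_size)])
    # Emit CPU time by function as CSV so the post-processing can read it.
    csv_path = result_dir + ".csv"
    subprocess.check_call(
        ["amplxe-cl", "-report", "hotspots", "-result-dir", result_dir,
         "-format", "csv", "-csv-delimiter", "comma",
         "-report-output", csv_path])
    return csv_path

if __name__ == "__main__":
    profile_run(1 << 20, 256, "r_1048576_256")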
I wrote some scripts to sweep over the different array and chunk sizes,
generating execution times by function, and other scripts to process the
data and plot the fraction of execution time spent in each of the 10
functions that were the biggest contributors to execution time across the
sweep (a sketch of that post-processing step follows below). Hopefully,
they'll provide some data about where to look when the time comes for
performance tuning. Also, these scripts should be pretty easily portable to
other programs.
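A rough sketch of the sweep/post-processing side, under stated assumptions:
the sweep points, the CSV file naming (matching the collection sketch above),
the "Function"/"CPU Time" column names, and the matplotlib-based plotting are
all illustrative guesses, not Nick's actual scripts:

#!/usr/bin/env python
# Sketch: read the per-function CSV reports for each (array size, chunk size)
# point and plot the fraction of CPU time spent in the top-10 functions
# across the sweep. Sizes, file names, and column names are assumptions.
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

ARRAY_SIZES = [2 ** k for k in range(16, 25, 2)]  # hypothetical sweep points
CHUNK_SIZES = [64, 256, 1024]

def read_report(path):
    # Return {function name: CPU time} from one VTune CSV report.
    # Column names vary across VTune versions; adjust as needed.
    times = {}
    with open(path) as f:
        for row in csv.DictReader(f):
            times[row["Function"]] = float(row["CPU Time"])
    return times

def plot_breakdown(reports):
    # reports: {(array size, chunk size): {function: CPU time}}
    totals = defaultdict(float)
    for times in reports.values():
        for fn, t in times.items():
            totals[fn] += t
    top10 = sorted(totals, key=totals.get, reverse=True)[:10]
    keys = sorted(reports)
    labels = ["%d/%d" % key for key in keys]
    xs = list(range(len(keys)))
    for fn in top10:
        fracs = [reports[k].get(fn, 0.0) / sum(reports[k].values())
                 for k in keys]
        plt.plot(xs, fracs, marker="o", label=fn)
    plt.xticks(xs, labels, rotation=45)
    plt.xlabel("array size / chunk size")
    plt.ylabel("Fraction of CPU time")
    plt.legend(fontsize="small")
    plt.tight_layout()
    plt.savefig("breakdown.png")

if __name__ == "__main__":
    reports = {(a, c): read_report("r_%d_%d.csv" % (a, c))
               for a in ARRAY_SIZES for c in CHUNK_SIZES}
    plot_breakdown(reports)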
-Nick