I can't answer your question about the LRU, but I don't believe Lustre has readahead/prefetching support for backwards reads, so you're probably doing 1 page per RPC, which will be fairly slow.  You can look at your RPC stats to see what size RPCs are being issued on the client:
lctl get_param osc.*.rpc_stats

To reset them before a test:
lctl set_param osc.*.rpc_stats=0
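To get a feel for what those stats tell you, here is a minimal sketch that computes the weighted-average read RPC size from the "pages per rpc" histogram. The sample text is illustrative only, and the exact layout of the real `lctl get_param osc.*.rpc_stats` output may differ by Lustre version; a histogram dominated by the 1-page bucket would confirm the single-page-per-RPC behavior described above.

```python
# Hypothetical sketch: average read RPC size (in pages) from an
# osc.*.rpc_stats "pages per rpc" histogram.  The sample below is
# made-up data in the approximate shape of the real output from
# `lctl get_param osc.*.rpc_stats`; the actual format may vary.
sample = """\
pages per rpc         rpcs   % cum % |       rpcs   % cum %
1:                    9000  90    90 |          0   0     0
2:                     500   5    95 |          0   0     0
256:                   500   5   100 |          0   0     0
"""

def avg_read_pages(text):
    total_rpcs = 0
    total_pages = 0
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip the header row
        bucket, rest = line.split(":", 1)
        read_rpcs = int(rest.split()[0])  # first column is the read side
        total_rpcs += read_rpcs
        total_pages += int(bucket) * read_rpcs
    return total_pages / total_rpcs

print(avg_read_pages(sample))
```

An average near 1 page (4 KB) per RPC instead of the usual 256 pages (1 MB) would explain a large slowdown on the backwards-read phases.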

Jeremy

On Mon, Jan 19, 2015 at 8:59 AM, John Bauer <bauerj@iodoctors.com> wrote:
Andreas

Thank you for the reply.  This investigation started with the observation of slow backwards reads of a file by an MSC.Nastran run doing
a Lanczos eigenvalue solve (see image below).  I point that out so it is known that I am not investigating an academic run of iozone.
It is far simpler to work with iozone than MSC.Nastran.

If you care to read a bit more to see the observed behavior of Lustre, please read on.

The following image depicts the access of the file over time by the iozone run.  What is quite odd is that when the second backward read of the file begins,
the reading of the file is at its fastest (steep slope).  This is at a time when all of the end of the file should have been LRU'd out of the system buffers by the previous backwards read.  The rate then slows down through the meat of the file and then starts getting faster again toward the end of the second backwards read.

I have run this job many times and the behavior, as depicted in the first image, is always the same.  The slopes vary some, but there is always this
serpentine look to it.  It is not the same OSTs every time.  If I run this with iozone using 256K requests, the slopes for the backwards reads get much lower.
To me, it seems as though something is wrong with the LRU mechanism.  Note in the last image, when iozone is using 256K requests, that this behavior starts during the
forward reads of the file, so it is not just a backward-read phenomenon.  It happens every time when reading backwards, but only occasionally during the forward reads.

John