[LKP] [mm] ac5b2c1891: vm-scalability.throughput -61.3% regression
torvalds at linux-foundation.org
Wed Dec 5 16:58:02 PST 2018
On Wed, Dec 5, 2018 at 3:51 PM Linus Torvalds
<torvalds at linux-foundation.org> wrote:
> Ok, I've applied David's latest patch.
> I'm not at all objecting to tweaking this further, I just didn't want
> to have this regression stand.
Hmm. Can somebody (David?) also perhaps try to state what the
different latency impacts end up being? I suspect it's been mentioned
several times during the argument, but it would be nice to have a
"going forward, this is what I care about" kind of setup for good measure.
How much of the problem ends up being about the cost of compaction vs
the cost of getting a remote node bigpage?
That would seem to be a fairly major issue, but __GFP_THISNODE affects
both. It limits compaction to just this node, in addition to obviously
limiting the allocation result.
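To make that dual effect concrete, here is a minimal user-space sketch (not kernel code; the two-node layout, struct, and helper names are invented for illustration) of how a THISNODE-style flag confines both the allocation attempt and the compaction attempt to the local node:

```c
#include <assert.h>
#include <stdbool.h>

#define THISNODE 0x1  /* stand-in for __GFP_THISNODE: restrict to the local node */

/* Per-node state in a toy multi-node system: does the node currently
 * have a free hugepage, and could compaction produce one there? */
struct node {
    bool free_hugepage;
    bool compactable;
};

/* Returns the node a hugepage came from, or -1 on failure.
 * With THISNODE set, both the allocation scan *and* the compaction
 * attempt are confined to `local` -- mirroring how the flag limits
 * both the result and the work done to get it. */
int alloc_hugepage(struct node *nodes, int nr_nodes, int local, unsigned flags)
{
    int first = (flags & THISNODE) ? local : 0;
    int last  = (flags & THISNODE) ? local : nr_nodes - 1;

    for (int n = first; n <= last; n++)          /* free hugepages first */
        if (nodes[n].free_hugepage)
            return n;
    for (int n = first; n <= last; n++)          /* then try compaction */
        if (nodes[n].compactable) {
            nodes[n].compactable = false;        /* compaction "succeeds" */
            return n;
        }
    return -1;
}
```

With THISNODE set, a remote node's already-free hugepage is never even considered, and remote nodes are never compacted; without it, both scans widen to every node.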
I realize that we probably do want to just have explicit policies that
do not exist right now, but what are (a) sane defaults, and (b) sane
policies?
For example, if we cannot get a hugepage on this node, but we *do* get
a node-local small page, is the local memory advantage simply better
than the possible TLB advantage?
Because if that's the case (at least commonly), then that in itself is
a fairly good argument for "hugepage allocations should always be
THISNODE".
But David also did mention the actual allocation overhead itself in
the commit, and maybe the math is more "try to get a local hugepage,
but if no such thing exists, see if you can get a remote hugepage
cheaply".
So another model can be "do local-only compaction, but allow non-local
allocation if the local node doesn't have anything". IOW, if other
nodes have hugepages available, pick them up, but don't try to compact
other nodes to do so?
And yet another model might be "do a least-effort thing, give me a
local hugepage if it exists, otherwise fall back to small pages".
So there are several different combinations of "try compaction" vs "local vs remote" here.
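The candidate policies above could be expressed as an ordered list of (remote?, compact?) attempts. A user-space sketch with invented names, purely to enumerate the combinations, not the kernel's actual code path:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* One allocation attempt: may we look beyond the local node, and may
 * we run compaction to manufacture a hugepage there? */
struct attempt { bool remote; bool compact; };

/* Strict THISNODE: local hugepage (compacting if needed) or give up. */
static const struct attempt strict_local[] = {
    { .remote = false, .compact = true },
};

/* Local-only compaction, but take an already-free remote hugepage. */
static const struct attempt remote_if_free[] = {
    { .remote = false, .compact = true },
    { .remote = true,  .compact = false },
};

/* Least effort: local hugepage only if one is already free; otherwise
 * the caller falls back to small pages. */
static const struct attempt least_effort[] = {
    { .remote = false, .compact = false },
};

/* Would any attempt in the policy succeed, given what's available? */
bool alloc_succeeds(const struct attempt *policy, size_t n,
                    bool local_free, bool local_compactable,
                    bool remote_free)
{
    for (size_t i = 0; i < n; i++) {
        const struct attempt *a = &policy[i];
        if (!a->remote && local_free)
            return true;
        if (!a->remote && a->compact && local_compactable)
            return true;
        if (a->remote && remote_free)
            return true;
        /* remote compaction is deliberately never attempted here */
    }
    return false;
}
```

The interesting case is a fragmented local node with free remote hugepages: strict THISNODE fails outright, the middle policy picks up the remote hugepage without compacting anyone else's node, and least-effort falls back to small pages.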