This is just a heads-up: for some reason the acpi layer and nvdimm use
the IS_ERR_VALUE() macro, and they use it incorrectly.
To see warnings about it, change the macro from

    #define IS_ERR_VALUE(x) unlikely((x) >= (unsigned long)-MAX_ERRNO)

to do a cast to a pointer and back (ie make the "(x)" part be
"(unsigned long)(void *)(x)" instead), which will then cause

    warning: cast to pointer from integer of different size

when the macro is passed an "int" argument.
The reason "int" arguments are wrong is that the macro really is
designed to test the upper range of a pointer value. It happens to
work for signed integers too, but looking at the users, pretty much
none of them are right. The ACPI and nvdimm users are all about the
perfectly standard "zero for success, negative error code for
failure", and so using

    if (IS_ERR_VALUE(ret))

is just plain garbage. The code generally should just do

    if (ret)

which is simpler, smaller, and generates better code.
This bug seems to have been so common in the power management code
that we even have a coccinelle script for it. But for some reason
several uses remain in acpi_debug.c, and now there are cases in the
nvdimm code too. There are various random crap cases like that
elsewhere as well, but acpi and nvdimm were just more dense with this
bug than most other places.
The bug isn't fatal - as mentioned, IS_ERR_VALUE() _does_ actually
work on the range of integers that are normal errors, but it's
pointless and actively misleading, and it's not meant for that use. So
it just adds complexity and worse code generation for no actual gain.
I noticed this when I was looking at similar idiocy in fs/gfs2, where
the code in question caused warnings (for other reasons, but the main
reason was "code too complex for gcc to understand it").