Hyperscan never goes exponential in the size of the searched string. It also doesn't handle constructs that require backtracking (e.g. backreferences).

This question arises from a common confusion. Irritatingly, various people (Friedl springs to mind) have willfully ignored the theory and redefined NFA to mean "something used to match regular expressions in some sort of way, not necessarily finite nor an automaton". NFA algorithms, as defined in the theory and as used in Hyperscan, are linear in the size of the searched string (although in practice they are a constant factor more expensive than the DFA implementations used within Hyperscan).
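To make that linearity concrete, here is a minimal sketch (my own illustration, not Hyperscan code) of the classic simultaneous-state NFA simulation, using a hand-built NFA for the hypothetical pattern "ab*c". All live states are advanced together in a bitmask, so each input byte is examined exactly once and the work per byte is bounded by the fixed number of states:

/* Sketch only: bitmask NFA simulation for a hand-built NFA
 * matching the hypothetical pattern "ab*c". */
#include <stdint.h>
#include <stdio.h>

#define NSTATES 3  /* 0: start, 1: after 'a' (loops on 'b'), 2: accept */

/* next_states[state][byte] = bitmask of successor states */
static uint8_t next_states[NSTATES][256];

static void build_nfa(void) {
    next_states[0]['a'] = 1u << 1;  /* start --a--> 1      */
    next_states[1]['b'] = 1u << 1;  /* 1 --b--> 1  (b*)    */
    next_states[1]['c'] = 1u << 2;  /* 1 --c--> 2 (accept) */
}

static int nfa_match(const char *input) {
    uint8_t active = 1u << 0;               /* start in state 0 */
    for (const char *p = input; *p; ++p) {  /* one pass: O(n) bytes */
        uint8_t nxt = 0;
        for (int s = 0; s < NSTATES; ++s)   /* bounded work per byte */
            if (active & (1u << s))
                nxt |= next_states[s][(unsigned char)*p];
        active = nxt;
        if (!active) return 0;              /* no live states left */
    }
    return (active & (1u << 2)) != 0;       /* accept iff state 2 is live */
}

int main(void) {
    build_nfa();
    printf("abbbc: %d\n", nfa_match("abbbc")); /* 1 */
    printf("ac:    %d\n", nfa_match("ac"));    /* 1 */
    printf("abx:   %d\n", nfa_match("abx"));   /* 0 */
    return 0;
}

Scanning n bytes therefore costs O(n) times a pattern-dependent constant; that constant is exactly the gap versus a DFA mentioned above.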

So if you use Hyperscan as-is, you will find that the features that could require exponential backtracking behavior are already disallowed, and everything should be fine. (Actually, I don't think matching backreferences is truly exponential for a fixed number of backreferences, but that's another argument.)
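You can see the disallowing happen at pattern-compile time. A hedged sketch against the Hyperscan C API (the pattern is illustrative; the header may be <hs.h> or <hs/hs.h> depending on your install):

/* Sketch: Hyperscan rejects backtracking constructs when the pattern
 * is compiled, so nothing super-linear ever reaches the scan path.
 * Build (typically): cc reject.c -lhs */
#include <stdio.h>
#include <hs.h>

int main(void) {
    hs_database_t *db = NULL;
    hs_compile_error_t *err = NULL;

    /* "(foo|bar)\1" contains a backreference, which Hyperscan does
     * not support; hs_compile() reports an error instead of building
     * a database that could backtrack. */
    if (hs_compile("(foo|bar)\\1", 0, HS_MODE_BLOCK,
                   NULL, &db, &err) != HS_SUCCESS) {
        fprintf(stderr, "rejected at compile time: %s\n", err->message);
        hs_free_compile_error(err);
        return 1;
    }

    hs_free_database(db);  /* not reached for this pattern */
    return 0;
}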

Regards,
Geoff Langdale.

On Thu, Nov 8, 2018 at 10:09 AM Bobby Martin <bobbymartin2@gmail.com> wrote:
Is there a way to configure Hyperscan to use only DFA state machines, or otherwise restrict the time complexity of matches to be polynomial at worst?

We need to ensure the algorithmic complexity of our matches doesn't go exponential in the size of the searched string, as NFA-based matchers can for degenerate cases.

We understand that this is incompatible with some regular expression features such as backreferences.

Thanks!
Bobby