Description
I just analyzed a project where a file with tons of `use` statements slows PHPCS down quadratically.
Specifically, the (relatively lightweight) `SlevomatCodingStandard\Helpers\TokenHelper#findPrevious()` is used in:

- `UseFromSameNamespaceSniff`
- `MultipleUsesPerLineSniff`
- `UseDoesNotStartWithBackslashSniff`

along with other code locations, such as the `SniffLocalCache` and `UseStatementHelper`.
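For context, the kind of file that triggers this looks roughly like the one below (namespace and class names are invented for illustration; only the sheer number of `use` statements matters):

```php
<?php

declare(strict_types=1);

namespace App\Service;

// Illustrative only: the profiled file simply had a very long block of
// use statements before the class body.
use App\Repository\InvoiceRepository;
use App\Repository\OrderRepository;
use App\Repository\UserRepository;
// ... hundreds more use statements ...

class CheckoutService
{
}
```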
It seems like `PHP_CodeSniffer\Files\File#findPrevious()` is extremely slow and complex, and it iterates over all of a file's tokens starting from the end: this is obviously problematic as the source code grows in size and we're analyzing `use` statements.
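To make the cost concrete, here's a minimal, self-contained sketch (not the real PHPCS or Slevomat code, just a model of the scan shape) showing how per-`use` backward scans add up:

```php
<?php

declare(strict_types=1);

// Simplified cost model: each use statement triggers a backward scan
// towards the start of the file (the shape of File::findPrevious()), so
// U use statements over ~N preceding tokens cost on the order of U * N.

// Fake token stream: 500 use statements, three tokens each.
$tokens = [];
for ($i = 0; $i < 500; $i++) {
    array_push($tokens, 'T_USE', 'T_STRING', 'T_SEMICOLON');
}

$comparisons = 0;
foreach ($tokens as $pointer => $code) {
    if ($code !== 'T_USE') {
        continue;
    }

    // A sniff asks e.g. "is there a T_NAMESPACE before this use?"; there is
    // none here, so every scan walks all the way back to token 0.
    for ($i = $pointer; $i >= 0; $i--) {
        $comparisons++;
        if ($tokens[$i] === 'T_NAMESPACE') {
            break;
        }
    }
}

echo $comparisons . PHP_EOL; // ~375,000 comparisons for only 1,500 tokens
```

In this model, doubling the number of `use` statements roughly quadruples the work, which matches the quadratic behaviour seen in the profile.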
I'm wondering whether we should try to optimize the `TokenHelper` in this project, by minimizing `#findPrevious()` calls, or PHP_CodeSniffer itself, by optimizing the main loop there 🤔
/cc @MatteoBiagini
Here's a rough profile screenshot, to make this a bit more visible: