avoiding top level await circular #5843
Conversation
Thank you for your contribution! ❤️ You can try out this pull request locally by installing Rollup via `npm install rollup/rollup#fix/top-level-await-circular`, or load it into the REPL. Notice: Ensure you have installed the latest stable Rust toolchain. If you haven't installed it yet, please see https://www.rust-lang.org/tools/install to learn how to download Rustup and install Rust.
Performance report
Codecov Report: All modified and coverable lines are covered by tests ✅

```diff
@@            Coverage Diff            @@
##           master    #5843     +/-  ##
=========================================
  Coverage   98.54%    98.55%
=========================================
  Files         269       269
  Lines        8550      8571     +21
  Branches     1467      1469      +2
=========================================
+ Hits         8426      8447     +21
  Misses         92        92
  Partials       32        32
```
Force-pushed from d6469c4 to 6f7da39
Force-pushed from 6f7da39 to 0f7ee98
Force-pushed from 0f7ee98 to c839982
Hi! Nice work! I am still in the process of reviewing, as I am trying to understand everything and to check whether there could be performance issues. Some small comments so far.
Force-pushed from c839982 to d23a21e
Thank you so much for your review and the advice on variable names! I've updated them. 😊
Force-pushed from d23a21e to 78ed455
Hi! So the reason I am slightly hesitant to merge this PR is that I am worried about performance. We are already in a loop over each chunkAtom where we then loop over each dependent entry.
Now inside the loop, we are adding a loop over the iterated entries (which is on average half the number of dependent entries), and for each of them, we first build a list of included modules that contains all dependencies (those can be a lot!) and then iterate over that list to verify a condition. Maybe it is not much of a problem, but I still wonder if we could get rid of some iterations. I am especially worried about the innermost iteration.
In the past, we had serious performance issues in the chunk assignment, but those only showed in very large projects where performance suddenly broke down completely. The solution at that time was to use BigInt bits as sets of indices and then use bitwise `&` and `|` to join and intersect those sets as needed. I wonder if this problem can also be expressed as set operations, maybe by creating some a-priori maps earlier.
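For illustration, here is a minimal sketch of that BigInt-as-bitset idea (not Rollup's actual code): each bit position stands for an index, `|` computes the union of two sets, and `&` their intersection.

```ts
// Minimal sketch of BigInt bitsets: bit i set means "index i is in the set".
const setOf = (...indices: number[]): bigint =>
	indices.reduce((bits, index) => bits | (1n << BigInt(index)), 0n);

const a = setOf(0, 2, 5); // {0, 2, 5}
const b = setOf(2, 3, 5); // {2, 3, 5}

const union = a | b;        // {0, 2, 3, 5}
const intersection = a & b; // {2, 5}
const aContains2 = (a & (1n << 2n)) !== 0n; // membership test -> true
```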
So I want to better understand the issue first. The problem occurs if a module A awaits a dynamic import of a module B which statically depends on a module C that is merged into the same chunk as A.
```mermaid
---
title: Initial dependency graph
---
graph LR
A -. awaits .-> B
B --> C
```
Without the dynamic import optimization for already loaded modules, this can only happen if A and C have the same dependent entries. As "entries" include both static and dynamic entries, this means both A and C need to have A and B as entries. But then we already have a cycle in the initial modules, and nobody can expect us to "fix" that somehow magically.
```mermaid
---
title: Only an initial cycle will always merge A and C
---
graph LR
subgraph chunk
A
C
end
A -. awaits .-> B
B --> C
A --> C
B -- cycle --> A
```
However, as you correctly observed, the optimization for already loaded modules can cause our issue because it removes the dependent entry B from C.
```mermaid
---
title: Cycle with removed dependent entry B
---
graph LR
subgraph chunk
A
C
end
A -. awaits .-> B
B --> C
A --> C
```
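To make the diagrams concrete, here is a hypothetical set of modules (file names invented for illustration) that matches the situation above:

```ts
// --- a.js (entry): statically imports C and awaits a dynamic import of B ---
import { shared } from './c.js';
const { value } = await import('./b.js'); // top-level await

// --- b.js (dynamic entry): statically depends on C ---
import { shared as sharedValue } from './c.js';
export const value = sharedValue + 1;

// --- c.js: shared dependency of A and B ---
export const shared = 42;
```

If the already-loaded optimization merges c.js into a.js's chunk, the chunk containing b.js has to import a.js's chunk to reach c.js, while a.js's chunk awaits b.js's chunk at the top level, which is the kind of circular top-level await this PR is trying to avoid.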
In general, this issue can happen if
- module A awaits the dynamic import of a module B that depends on a module C through a chain of static and awaited dynamic imports, i.e. importing A waits for C to be imported
- A and C have the same `dependentEntries` after optimization
In such a scenario, we can prevent C from ending up in the same chunk as A by NOT removing the last awaited dynamic import B2 in the chain from the dependent entries of C. Above, B2 = B, but more generally:
```mermaid
---
title: General case where A waits for C
---
graph LR
A -. awaits .-> B
B --> M
M -. awaits .-> B2
B2 -- "must be retained as dependent entry" --> C
A --> C
subgraph chunk
B
M
end
```
Note that in the terminology of the algorithm, all modules with the same dependent entries are pre-grouped into "atoms". E.g. in the situation above, B and M are in the same atom. So we want to identify all atoms that would technically be already loaded when a dynamic import occurs, but where all imports are awaited. Those should then not be removed.
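As a rough illustration of that pre-grouping (a sketch, not Rollup's actual implementation), modules could be bucketed by a canonical key derived from their dependent-entry sets:

```ts
// Illustrative grouping of modules into "atoms" by identical dependent entries.
function groupIntoAtoms<Module>(
	dependentEntriesByModule: Map<Module, Set<number>>
): { modules: Module[]; dependentEntries: Set<number> }[] {
	const atomsByKey = new Map<string, { modules: Module[]; dependentEntries: Set<number> }>();
	for (const [module, dependentEntries] of dependentEntriesByModule) {
		// Modules with exactly the same dependent entries share one atom.
		const key = [...dependentEntries].sort((left, right) => left - right).join(',');
		const atom = atomsByKey.get(key);
		if (atom) {
			atom.modules.push(module);
		} else {
			atomsByKey.set(key, { modules: [module], dependentEntries: new Set(dependentEntries) });
		}
	}
	return [...atomsByKey.values()];
}
```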
If we assume we have something like `alreadyLoadedAtomsByEntry`, but only taking awaited dynamic imports into account, e.g. `awaitedAlreadyLoadedAtomsByEntry`, then `removeUnnecessaryDependentEntries` could become:
```typescript
function removeUnnecessaryDependentEntries(
	chunkAtoms: ModulesWithDependentEntries[],
	alreadyLoadedAtomsByEntry: bigint[],
	awaitedAlreadyLoadedAtomsByEntry: bigint[]
) {
	// Remove entries from dependent entries if a chunk is already loaded without
	// that entry. Do not remove already loaded atoms where all dynamic imports
	// are awaited to avoid cycles in the output.
	let chunkMask = 1n;
	for (const { dependentEntries } of chunkAtoms) {
		for (const entryIndex of dependentEntries) {
			if (
				(alreadyLoadedAtomsByEntry[entryIndex] & chunkMask) === chunkMask &&
				(awaitedAlreadyLoadedAtomsByEntry[entryIndex] & chunkMask) === 0n
			) {
				dependentEntries.delete(entryIndex);
			}
		}
		chunkMask <<= 1n;
	}
}
```
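To see how the added condition behaves, here is a hypothetical walk-through for a single atom (`chunkMask = 1n`) and three entries; the bit patterns are invented for illustration:

```ts
const chunkMask = 1n;
const alreadyLoadedAtomsByEntry = [0n, 1n, 1n];
const awaitedAlreadyLoadedAtomsByEntry = [0n, 1n, 0n];

// Entry 0: the atom is not already loaded for this entry -> entry is kept.
console.log((alreadyLoadedAtomsByEntry[0] & chunkMask) === chunkMask); // false

// Entry 1: already loaded, but only through awaited dynamic imports -> kept,
// which is exactly what prevents the circular top-level await.
console.log((awaitedAlreadyLoadedAtomsByEntry[1] & chunkMask) === 0n); // false

// Entry 2: already loaded and not exclusively awaited -> removed as before.
console.log(
	(alreadyLoadedAtomsByEntry[2] & chunkMask) === chunkMask &&
		(awaitedAlreadyLoadedAtomsByEntry[2] & chunkMask) === 0n
); // true
```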
To get `awaitedAlreadyLoadedAtomsByEntry`, we could run the existing `getAlreadyLoadedAtomsByEntry` function but use `awaitedDynamicImportsByEntry` and `dynamicallyDependentEntriesByAwaitedDynamicEntry` as input.
The `awaitedDynamicImportsByEntry` should be easy to collect in `analyzeModuleGraph`. To get `dynamicallyDependentEntriesByAwaitedDynamicEntry`, we could write a function like `getDynamicallyDependentEntriesByDynamicEntry` that gets `awaitedDynamicEntries` as input but iterates over `dynamicEntry.includedDirectTopLevelAwaitingDynamicImporters`. It could also be the same function if we provide a method as a parameter that maps a dynamicEntry to an iterator of its dynamic importers, as sketched below.
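A hypothetical sketch of that parameterization (the name, types, and maps here are assumptions for illustration, not Rollup's actual internals): the helper receives an accessor that maps a dynamic entry to the importers that should be considered.

```ts
// Sketch: one helper serves both the general and the awaited-only variant by
// taking the "dynamic importers" accessor as a parameter.
function getDependentEntriesByDynamicEntry<Module>(
	dynamicEntries: ReadonlySet<Module>,
	entryIndexByModule: ReadonlyMap<Module, number>,
	getDynamicImporters: (dynamicEntry: Module) => Iterable<Module>
): Map<number, Set<number>> {
	const dependentEntriesByDynamicEntry = new Map<number, Set<number>>();
	for (const dynamicEntry of dynamicEntries) {
		const dependentEntries = new Set<number>();
		for (const importer of getDynamicImporters(dynamicEntry)) {
			const importerIndex = entryIndexByModule.get(importer);
			if (importerIndex !== undefined) {
				dependentEntries.add(importerIndex);
			}
		}
		dependentEntriesByDynamicEntry.set(entryIndexByModule.get(dynamicEntry)!, dependentEntries);
	}
	return dependentEntriesByDynamicEntry;
}

// Awaited-only variant: pass an accessor that only yields importers awaiting the
// dynamic import at the top level, e.g.
// dynamicEntry => dynamicEntry.includedDirectTopLevelAwaitingDynamicImporters
```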
Force-pushed from 5860147 to b46e154
That's amazing!!! Thanks for your thorough research and professional solution! This really helps prevent performance degradation! I've updated it based on your idea!
So it really worked 😇? Because I really did not try it out and just hoped there was no flaw in the logic. Nice!
Nice, this looks really good to me now! One last comment, but other than that I would say this is good to merge!
This PR has been released as part of rollup@4.36.0. You can test it by installing that version via npm.
Should this also be considered fixing #4708? I re-ran the repro with the latest Rollup version and it seems to be resolved.
Thanks for the reminder. Issue closed.
This PR contains:
Are tests included?
Breaking Changes?
List any relevant issue numbers:
resolves #5774
Description
At the moment, this PR only resolves top-level await that is not inside a function body. Therefore, it does not currently handle `await import()` expressions within async functions that are called with top-level await. I'm considering how to make Rollup aware of these scenarios, but it will likely take some time.
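For clarity, a hypothetical illustration of the current scope (module names invented):

```ts
// Handled by this PR: a top-level awaited dynamic import directly in the module body.
const direct = await import('./dep.js');

// Not handled yet: the awaited dynamic import sits inside an async function
// that is itself awaited at the top level.
async function load() {
	return import('./dep.js');
}
const indirect = await load();
```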