avoiding top level await circular by TrickyPi · Pull Request #5843 · rollup/rollup · GitHub

avoiding top level await circular #5843


Merged
merged 6 commits into master from fix/top-level-await-circular on Mar 17, 2025

Conversation

TrickyPi
Member

This PR contains:

  • bugfix
  • feature
  • refactor
  • documentation
  • other

Are tests included?

  • yes (bugfixes and features will not be merged without tests)
  • no

Breaking Changes?

  • yes (breaking changes will not be merged unless absolutely necessary)
  • no

List any relevant issue numbers:
resolves #5774

Description

At the moment, this PR only resolves top-level await that is not inside a function body. Therefore, it does not currently handle await import() expressions within async functions that are called with top-level await. I'm considering how to make Rollup aware of these scenarios, but it will likely take some time.
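To make the current scope concrete, here is an illustrative sketch (not Rollup code; the built-in modules are just stand-ins) of the two shapes of awaited dynamic imports described above:

```typescript
// Shape 1: a top-level await of a dynamic import. This module's evaluation
// blocks until 'node:path' has finished evaluating; this is the case the
// PR currently detects.
const path = await import('node:path');

// Shape 2: the same await inside an async function that is itself awaited
// at the top level. Per the description above, this is not yet detected.
async function loadUrl() {
	return import('node:url');
}
const url = await loadUrl();
```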

vercel bot commented Feb 15, 2025

The latest updates on your projects. Learn more about Vercel for Git ↗︎

Name | Status | Preview | Comments | Updated (UTC)
rollup | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Mar 17, 2025 5:53am

github-actions bot commented Feb 15, 2025

Thank you for your contribution! ❤️

You can try out this pull request locally by installing Rollup via

npm install rollup/rollup#fix/top-level-await-circular

Notice: Ensure you have installed the latest stable Rust toolchain. If you haven't installed it yet, please see https://www.rust-lang.org/tools/install to learn how to download Rustup and install Rust.

or load it into the REPL:
https://rollup-armlo8eer-rollup-js.vercel.app/repl/?pr=5843

github-actions bot commented Feb 15, 2025

Performance report

  • BUILD: 7239ms, 744 MB
    • initialize: 0ms, 28.1 MB
    • generate module graph: 2788ms, 559 MB
      • generate ast: 1255ms, 554 MB
    • sort and bind modules: 395ms, 604 MB
    • mark included statements: 4055ms, 744 MB
      • treeshaking pass 1: 2388ms, 744 MB
      • treeshaking pass 2: 467ms, 742 MB
      • treeshaking pass 3: 400ms, 745 MB
      • treeshaking pass 4: 393ms, 750 MB
      • treeshaking pass 5: 389ms, 744 MB
  • GENERATE: 726ms, 987 MB
    • initialize render: 0ms, 883 MB
    • generate chunks: 69ms, 894 MB
      • optimize chunks: 0ms, 889 MB
    • render chunks: 638ms, 963 MB
    • transform chunks: 16ms, 987 MB
    • generate bundle: 0ms, 987 MB

codecov bot commented Feb 15, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 98.55%. Comparing base (7710d69) to head (3a69626).
Report is 3 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #5843   +/-   ##
=======================================
  Coverage   98.54%   98.55%           
=======================================
  Files         269      269           
  Lines        8550     8571   +21     
  Branches     1467     1469    +2     
=======================================
+ Hits         8426     8447   +21     
  Misses         92       92           
  Partials       32       32           

☔ View full report in Codecov by Sentry.

Member
@lukastaegert left a comment


Hi! Nice work, I am still in the process of reviewing as I am trying to understand everything, and also if there could be performance issues. Some small comments so far.

@TrickyPi
Member Author

Thank you so much for your review and the advice on variable names! I've updated them. 😊

Member
@lukastaegert left a comment


Hi! So the reason I am slightly hesitant to merge this PR is that I am worried about performance. We are already in a loop over each chunkAtom where we then loop over each dependent entry.
Now inside the loop, we are adding a loop over the iterated entries (which is on average half the number of dependent entries), and for each of them, we first build a list of included modules that contains all dependencies (those can be a lot!) and then iterate over that list to verify a condition. Maybe it is not much of a problem, but I still wonder if we could get rid of some iterations. I am especially worried about the innermost iteration.

In the past, we had serious performance issues in the chunk assignment, but those only showed in very large projects where performance suddenly broke down completely. The solution at that time was to use BigInt bits as sets of indices and then use bitwise & and | to join and intersect those sets as needed. I wonder if this problem can also be expressed as set operations, maybe by creating some a-priori maps earlier.
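The BigInt-as-bitset technique described above can be sketched as follows (illustrative helper names, not Rollup's implementation): each index becomes one bit, and union/intersection of arbitrarily large index sets collapse into single `|` and `&` operations.

```typescript
// Represent a set of atom/chunk indices as bits of a BigInt.
function addIndex(set: bigint, index: number): bigint {
	return set | (1n << BigInt(index));
}
function hasIndex(set: bigint, index: number): boolean {
	return (set & (1n << BigInt(index))) !== 0n;
}
// Joining and intersecting sets is a single bitwise operation,
// independent of how many indices the sets contain.
function union(a: bigint, b: bigint): bigint {
	return a | b;
}
function intersect(a: bigint, b: bigint): bigint {
	return a & b;
}

const setA = addIndex(addIndex(0n, 0), 2); // {0, 2} -> 0b0101
const setB = addIndex(addIndex(0n, 2), 3); // {2, 3} -> 0b1100
// union(setA, setB) -> 0b1101 = 13n, intersect(setA, setB) -> 0b0100 = 4n
```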

So I want to better understand the issue first. The problem occurs if a module A awaits a dynamic import of a module B which statically depends on a module C that is merged into the same chunk as A.

---
title: Initial dependency graph
---
graph LR
    A -. awaits .-> B
    B --> C

Without the dynamic import optimization for already loaded modules, this can only happen if A and C have the same dependent entries. As "entries" include both static and dynamic entries, this means both A and C need to have A and B as entries. But then we already have a cycle in the initial modules, and nobody can expect us to "fix" that somehow magically.

---
title: Only an initial cycle will always merge A and C
---
graph LR
    subgraph chunk
        A
        C
    end
    A -. awaits .-> B
    B --> C
    A --> C
    B -- cycle --> A

However, as you correctly observed, the optimization for already loaded modules can cause our issue because it removes the dependent entry B from C.

---
title: Cycle with removed dependent entry B
---
graph LR
    subgraph chunk
        A
        C
    end
    A -. awaits .-> B
    B --> C
    A --> C

In general, this issue can happen if

  • module A awaits the dynamic import of a module B that depends on a module C through a chain of static and awaited dynamic imports, i.e. importing A waits for C to be imported
  • A and C have the same dependentEntries after optimization

In such a scenario, we can avoid that C ends up in the same chunk as A by NOT removing the last awaited dynamic import B2 in the chain from the dependent entries of C. Above, B2 = B, but more generally:
---
title: General case where A waits for C
---
graph LR
    A -. awaits .-> B
    B --> M
    M -. awaits .-> B2
    B2 -- "must be retained as dependent entry" --> C
    A --> C
    subgraph chunk
        B
        M
    end

Note that in the terminology of the algorithm, all modules with the same dependent entries are pre-grouped into "atoms". E.g. in the situation above, B and M are in the same atom. So we want to identify all atoms that would technically be already loaded when a dynamic import occurs, but where all imports are awaited. Those should then not be removed.
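The "atom" grouping can be sketched like this (illustrative only; `groupIntoAtoms` and its inputs are hypothetical, not Rollup's actual data structures): modules whose dependent-entry sets are identical end up in one atom.

```typescript
// Group modules by identical sets of dependent entry indices.
function groupIntoAtoms(
	dependentEntriesByModule: Map<string, Set<number>>
): Map<string, string[]> {
	const atoms = new Map<string, string[]>();
	for (const [module, entries] of dependentEntriesByModule) {
		// A canonical key for the set of entry indices
		const key = [...entries].sort((a, b) => a - b).join(',');
		const atom = atoms.get(key);
		if (atom) atom.push(module);
		else atoms.set(key, [module]);
	}
	return atoms;
}

const atoms = groupIntoAtoms(
	new Map([
		['B', new Set([0, 1])],
		['M', new Set([0, 1])], // same dependent entries as B -> same atom
		['C', new Set([0])]
	])
);
// atoms: '0,1' -> ['B', 'M'], '0' -> ['C']
```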

If we assume we have something like alreadyLoadedAtomsByEntry that only takes awaited dynamic imports into account, e.g. awaitedAlreadyLoadedAtomsByEntry, then removeUnnecessaryDependentEntries could become:

function removeUnnecessaryDependentEntries(
	chunkAtoms: ModulesWithDependentEntries[],
	alreadyLoadedAtomsByEntry: bigint[],
	awaitedAlreadyLoadedAtomsByEntry: bigint[]
) {
	// Remove entries from dependent entries if a chunk is already loaded without
	// that entry. Do not remove already loaded atoms where all dynamic imports
	// are awaited to avoid cycles in the output.
	let chunkMask = 1n;
	for (const { dependentEntries } of chunkAtoms) {
		for (const entryIndex of dependentEntries) {
			if (
				(alreadyLoadedAtomsByEntry[entryIndex] & chunkMask) === chunkMask &&
				(awaitedAlreadyLoadedAtomsByEntry[entryIndex] & chunkMask) === 0n
			) {
				dependentEntries.delete(entryIndex);
			}
		}
		chunkMask <<= 1n;
	}
}
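As a sanity check, the proposed function can be exercised on tiny hand-made inputs (a sketch: the real ModulesWithDependentEntries type is reduced here to just its dependentEntries set):

```typescript
type ModulesWithDependentEntries = { dependentEntries: Set<number> };

// Same body as the proposed function above.
function removeUnnecessaryDependentEntries(
	chunkAtoms: ModulesWithDependentEntries[],
	alreadyLoadedAtomsByEntry: bigint[],
	awaitedAlreadyLoadedAtomsByEntry: bigint[]
) {
	let chunkMask = 1n;
	for (const { dependentEntries } of chunkAtoms) {
		for (const entryIndex of dependentEntries) {
			if (
				(alreadyLoadedAtomsByEntry[entryIndex] & chunkMask) === chunkMask &&
				(awaitedAlreadyLoadedAtomsByEntry[entryIndex] & chunkMask) === 0n
			) {
				dependentEntries.delete(entryIndex);
			}
		}
		chunkMask <<= 1n;
	}
}

// Two atoms (masks 1n and 2n) and two entries:
// - entry 0 already has atom 0 loaded with no awaited imports involved,
//   so atom 0 can safely drop entry 0;
// - entry 1 has atom 1 already loaded, but only via awaited dynamic
//   imports, so atom 1 must keep entry 1 to avoid an output cycle.
const atoms = [
	{ dependentEntries: new Set([0, 1]) },
	{ dependentEntries: new Set([0, 1]) }
];
removeUnnecessaryDependentEntries(atoms, [1n, 2n], [0n, 2n]);
// atoms[0].dependentEntries is now {1}; atoms[1].dependentEntries is unchanged
```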

To get awaitedAlreadyLoadedAtomsByEntry, we could run the existing getAlreadyLoadedAtomsByEntry function but use awaitedDynamicImportsByEntry and dynamicallyDependentEntriesByAwaitedDynamicEntry as input.

The awaitedDynamicImportsByEntry should be easy to collect in analyzeModuleGraph. To get dynamicallyDependentEntriesByAwaitedDynamicEntry, we could write a function like getDynamicallyDependentEntriesByDynamicEntry that gets awaitedDynamicEntries as input but iterates over dynamicEntry.includedDirectTopLevelAwaitingDynamicImporters. It could also be the same function if we provide, as a parameter, a method that maps a dynamicEntry to an iterator of its dynamic importers.

@TrickyPi
Member Author
TrickyPi commented Mar 11, 2025

That's amazing!!! Thanks for your thorough research and professional solution! This really helps prevent performance degradation! I've updated it based on your idea!

@lukastaegert
Member

I've updated it based on your idea!

So it really worked 😇? Because I really did not try it out, and just hoped there was no flaw in the logic. Nice!

Member
@lukastaegert left a comment


Nice, this looks really good to me now! One last comment, but other than that I would say this is good to merge!

@lukastaegert lukastaegert enabled auto-merge March 17, 2025 05:52
@lukastaegert lukastaegert added this pull request to the merge queue Mar 17, 2025
Merged via the queue into master with commit 8c15e59 Mar 17, 2025
41 checks passed
@lukastaegert lukastaegert deleted the fix/top-level-await-circular branch March 17, 2025 06:48

This PR has been released as part of rollup@4.36.0. You can test it via npm install rollup.

@bluwy
Contributor
bluwy commented Mar 17, 2025

Should this also be considered fixing #4708?

I re-ran the repro with the latest Rollup version and it seems to be resolved.

@TrickyPi
Member Author

Thanks for the reminder. Issue closed.

Successfully merging this pull request may close these issues.

The code built by the top-level await import() has circular references