🐛 Bug Report: very slow migration #5891
Hi!
@AidenY69, what versions are you upgrading between?
When running migrations, the database schema and documents are updated and Appwrite must iterate over everything.
Right now I am using these PowerShell scripts because I need to export data into JSON format. That method is simple and guarantees the data is readable, combinable from different sources, and storable in Git.
@joeyouss I think a JSON export feature would be quite useful for the Appwrite CLI.

```js
// Assumes surrounding context: `database` and `collections` come from earlier
// listDatabases/listCollections calls, `fs` is fs/promises, and
// `listDocuments` is a helper that pages through all documents in a collection.
for (const collection of collections) {
  console.log(`Making backup of ${collection.$id}`);
  await fs.mkdir(`backup/databases/${database.$id}/${collection.$id}`, {
    recursive: true,
  });
  for await (const document of listDocuments(database.$id, collection.$id)) {
    console.log(
      `Writing ${document.$id} from ${collection.$id} from ${database.$id}`
    );
    await fs.writeFile(
      `backup/databases/${database.$id}/${collection.$id}/${document.$id}.json`,
      JSON.stringify(document, null, 2)
    );
  }
}
```
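The backup loop above relies on a `listDocuments` helper that is not shown. A minimal sketch of such a helper, assuming cursor-style pagination: the page-fetching call is injected as a parameter so the pagination logic stands alone (with the Appwrite Node SDK, `fetchPage` would typically wrap `databases.listDocuments` using `Query.limit` and `Query.cursorAfter`).

```javascript
// Hypothetical helper: an async generator that yields every document in a
// collection one page at a time. `fetchPage(lastId, pageSize)` must return
// an array of documents (each carrying an `$id`) that come after `lastId`,
// or a short/empty array when the collection is exhausted.
async function* paginate(fetchPage, pageSize = 100) {
  let lastId = null;
  while (true) {
    const page = await fetchPage(lastId, pageSize);
    for (const doc of page) yield doc;
    if (page.length < pageSize) break; // short page means no more documents
    lastId = page[page.length - 1].$id; // resume after the last document seen
  }
}
```

Fetching in fixed-size pages keeps memory flat even for very large collections, which matters for the multi-million-document databases discussed in this issue.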
@tripolskypetr Is this still relevant?
It hasn't even worked all this time. Do you have manual testers on the team? Or just fake unit tests, so nobody knows whether it will hold up in production?
@tripolskypetr There is a team of volunteer QA testers from the community who test features and releases manually prior to production releases. I think that in previous versions it was sometimes necessary to iterate over all the documents inside a collection because the database schema changed, so it could take a lot of time. In any case, I think there should be a zero-downtime migration feature, so the migration can run in the background without affecting your app or causing disruption.
I suggested this so that migrations don't block the entire instance while they are being processed.
For a migration that requires changes to an extremely large database with an active user base (where zero downtime is a must), one option is to put a load balancer in front of the instance, create a copy of the instance or the database, migrate all the data, and then, once the old data has been migrated, migrate the data that was added while the migration was running and keep it synced, using functions for example. I understand it's not an easy approach, and it should ideally be automated or done more efficiently.
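The catch-up step of that copy-then-sync idea can be sketched as a small function. This is only an illustration under stated assumptions: `docs` is any (async) iterable of documents carrying Appwrite's `$updatedAt` timestamp, `writeDoc` is a hypothetical callback that persists one document to the new instance, and `migrationStart` is the ISO-8601 timestamp recorded when the bulk copy began.

```javascript
// Hypothetical catch-up sync: after the bulk copy finishes, re-copy only
// the documents that were created or updated while the migration ran.
async function syncNewerDocuments(docs, writeDoc, migrationStart) {
  let copied = 0;
  for await (const doc of docs) {
    // ISO-8601 timestamps compare correctly as plain strings.
    if (doc.$updatedAt > migrationStart) {
      await writeDoc(doc);
      copied++;
    }
  }
  return copied;
}
```

Comparing against a recorded cutoff timestamp keeps the second pass proportional to the write volume during the migration rather than to the full collection size.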
This issue has been labeled as a 'question', indicating that it requires additional information from the requestor. It has been inactive for 7 days. If no further activity occurs, this issue will be closed in 14 days. |
This issue has been closed due to inactivity. If you still require assistance, please provide the requested information. |
👟 Reproduction steps
When upgrading my current AppWrite to the newest version, the migration takes multiple days.
👍 Expected behavior
Faster migration regardless of record size
👎 Actual Behavior
When upgrading my current Appwrite to the newest version, the migration takes multiple days. I have ~30 million records, but I feel like that shouldn't be impacting the migration, since the database itself isn't changing, just the Appwrite code. Are there any solutions for making this process faster, or is there something on Appwrite's end that you can do to optimize large migrations?
The platform has to be down during a migration, and with a 4 day estimation, that's pretty extreme for a scheduled downtime.
🎲 Appwrite version
Version 1.3.x
💻 Operating system
Linux
🧱 Your Environment
No response
👀 Have you spent some time to check if this issue has been raised before?
🏢 Have you read the Code of Conduct?