Fix lsn for KEEPALIVE action. #164
Conversation
Draft, as I will write a unit test for the case when no activity happens on the source (sort of an edge case). Also, I have a question about some other tests (cdc/*): I see that Keepalive is in the expected JSON. How does every test run emit the Keepalive at the same position relative to other actions?
My reading of the Postgres source code makes me think that this PR is okay: we actually want to advance to WalEnd when receiving a keepalive message. See the function WalSndUpdateProgress in src/backend/replication/walsender.c.

About the positions of the Keepalive (and other messages) in the test cases, we just skip comparing them because each run may produce different ones.
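To illustrate the behavior being discussed, here is a minimal sketch (in Python, not pgcopydb's actual C code) of parsing a Primary keepalive message from the streaming replication protocol and advancing the tracked LSN to its `walEnd`, which is what this fix does. The variable names are illustrative, not pgcopydb's symbols.

```python
# Sketch: parse a 'k' (Primary keepalive) copy-data message from the
# PostgreSQL streaming replication protocol and advance the written LSN.
import struct

def parse_keepalive(msg: bytes):
    """Parse a keepalive message.

    Wire format: Byte1('k'), Int64 walEnd, Int64 sendTime, Byte1 replyRequested
    """
    if msg[0:1] != b"k":
        raise ValueError("not a keepalive message")
    wal_end, send_time, reply_requested = struct.unpack("!QQB", msg[1:18])
    return wal_end, send_time, bool(reply_requested)

def lsn_str(lsn: int) -> str:
    """Format a 64-bit LSN in the textual X/Y form Postgres uses."""
    return f"{lsn >> 32:X}/{lsn & 0xFFFFFFFF:X}"

# Example: a keepalive reporting walEnd = 0/155D550, no reply requested.
msg = b"k" + struct.pack("!QQB", 0x0155D550, 0, 0)
wal_end, _, reply = parse_keepalive(msg)

# The fix: on keepalive, advance the written LSN to walEnd, so a KEEPALIVE
# sitting exactly at endpos satisfies the `written lsn >= endpos` check.
written_lsn = wal_end
```

This mirrors what WalSndUpdateProgress implies on the sender side: the keepalive's `walEnd` is the authoritative position to report.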
Thanks for confirming that this fix is correct. I've published the PR since this change is important and I couldn't finish the unit tests I mentioned. I will share them in a separate PR once I am done.
It seems you have added an extra KeepAlive message to some of the tests' output, and the expected file has not been updated by your PR, so tests are failing. Please see about updating the expected test results.
Force-pushed from 64ee757 to b6a34c2
Oh right, I have updated the PR.
In a Postgres-to-Timescale migration, pgcopydb currently executes VACUUM ANALYZE at the hypertable level, which processes the underlying chunks serially. For hypertables with a significant number of chunks, this approach can be time-consuming. This commit disables pgcopydb's default vacuum operation and instead performs vacuuming at the chunk level using parallel workers. This change allows for better utilization of available resources on the target system, speeding up the overall process.

Parallel vacuum analyze at chunk level:

```
Completed Vacuum analyze tables in 23 seconds
```

Vacuum analyze on hypertable:

```
vacuum analyze metrics_tmp ;
VACUUM
Time: 35636.559 ms (00:35.637)
```

The parallel vacuum analyze is about 35% faster (35.6 s down to 23 s)!

Fixes https://linear.app/timescale/issue/DAT-66/pg-to-ts-vaccum-analyze-via-pgcopydb-is-slow-for-hypertables-with-many

Signed-off-by: Arunprasad Rajkumar <ar.arunprasad@gmail.com>
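The per-chunk approach above can be sketched as follows. This is a hedged illustration, not the actual migration tooling: the chunk names and the `run_sql` callable are stand-ins for whatever executes SQL against the target.

```python
# Sketch: issue one VACUUM ANALYZE per hypertable chunk and run the
# statements on a pool of workers instead of one serial hypertable-level
# VACUUM ANALYZE. `run_sql` is an assumed callable that executes SQL.
from concurrent.futures import ThreadPoolExecutor

def vacuum_chunks(chunks, run_sql, workers=4):
    """Run VACUUM ANALYZE for each chunk on a worker pool."""
    stmts = [f"VACUUM ANALYZE {chunk};" for chunk in chunks]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() blocks until every statement has been executed.
        list(pool.map(run_sql, stmts))
    return stmts

# Usage with a stand-in runner that just records the statements it "ran":
executed = []
chunks = ["_timescaledb_internal._hyper_1_1_chunk",
          "_timescaledb_internal._hyper_1_2_chunk"]
vacuum_chunks(chunks, executed.append, workers=2)
```

In the real migration the chunk list would come from TimescaleDB (e.g. its chunk catalog), and each worker would hold its own connection, since VACUUM cannot run inside a transaction block shared with other statements.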
06347f0 Run VACUUM ANALYZE in Parallel for Hypertable Chunks (dimitri#164)
e59f88b Fix ctid based same table copy concurrency (dimitri#165)
57908cc Make queries compatible with PG12 (dimitri#163)

Signed-off-by: Arunprasad Rajkumar <ar.arunprasad@gmail.com>
It fixes an issue under the following conditions:

- `endpos` is set to the current LSN.
- The action at `endpos` is `KEEPALIVE`.

In this case, the written `lsn` that corresponds to the `KEEPALIVE` isn't correct, i.e. it is less than `endpos`. This ultimately leads to the Apply process waiting for a statement `>= endpos` which never arrives, hence it never terminates.
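The termination condition described above can be sketched in a few lines. This is illustrative only: the names are not pgcopydb's actual symbols, and the LSN values are made up to show the before/after behavior.

```python
# Sketch: the Apply process stops once it has seen a message whose LSN
# reaches endpos. Before the fix, the KEEPALIVE carried a stale LSN below
# endpos, so this condition was never satisfied and the process hung.
def reached_endpos(message_lsn: int, endpos: int) -> bool:
    return message_lsn >= endpos

endpos = 0x0155D550                 # current LSN at the time endpos was set
stale_keepalive_lsn = 0x0155D500    # before the fix: lags behind endpos
fixed_keepalive_lsn = 0x0155D550    # after the fix: advanced to walEnd
```

With the stale LSN the check never becomes true for the final `KEEPALIVE`; with the fixed LSN it does, and the Apply process can terminate.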