Age Owner Branch data TLA Line data Source code
1 : : /*-------------------------------------------------------------------------
2 : : * worker.c
3 : : * PostgreSQL logical replication worker (apply)
4 : : *
5 : : * Copyright (c) 2016-2025, PostgreSQL Global Development Group
6 : : *
7 : : * IDENTIFICATION
8 : : * src/backend/replication/logical/worker.c
9 : : *
10 : : * NOTES
11 : : * This file contains the worker which applies logical changes as they come
12 : : * from remote logical replication stream.
13 : : *
14 : : * The main worker (apply) is started by logical replication worker
15 : : * launcher for every enabled subscription in a database. It uses
16 : : * walsender protocol to communicate with publisher.
17 : : *
18 : : * This module includes server facing code and shares libpqwalreceiver
19 : : * module with walreceiver for providing the libpq specific functionality.
20 : : *
21 : : *
22 : : * STREAMED TRANSACTIONS
23 : : * ---------------------
24 : : * Streamed transactions (large transactions exceeding a memory limit on the
25 : : * upstream) are applied using one of two approaches:
26 : : *
27 : : * 1) Write to temporary files and apply when the final commit arrives
28 : : *
29 : : * This approach is used when the user has set the subscription's streaming
30 : : * option as on.
31 : : *
32 : : * Unlike the regular (non-streamed) case, handling streamed transactions has
33 : : * to handle aborts of both the toplevel transaction and subtransactions. This
34 : : * is achieved by tracking offsets for subtransactions, which is then used
35 : : * to truncate the file with serialized changes.
36 : : *
37 : : * The files are placed in tmp file directory by default, and the filenames
38 : : * include both the XID of the toplevel transaction and OID of the
39 : : * subscription. This is necessary so that different workers processing a
40 : : * remote transaction with the same XID doesn't interfere.
41 : : *
42 : : * We use BufFiles instead of using normal temporary files because (a) the
43 : : * BufFile infrastructure supports temporary files that exceed the OS file size
44 : : * limit, (b) provides a way for automatic clean up on the error and (c) provides
45 : : * a way to survive these files across local transactions and allow to open and
46 : : * close at stream start and close. We decided to use FileSet
47 : : * infrastructure as without that it deletes the files on the closure of the
48 : : * file and if we decide to keep stream files open across the start/stop stream
49 : : * then it will consume a lot of memory (more than 8K for each BufFile and
50 : : * there could be multiple such BufFiles as the subscriber could receive
51 : : * multiple start/stop streams for different transactions before getting the
52 : : * commit). Moreover, if we don't use FileSet then we also need to invent
53 : : * a new way to pass filenames to BufFile APIs so that we are allowed to open
54 : : * the file we desired across multiple stream-open calls for the same
55 : : * transaction.
56 : : *
57 : : * 2) Parallel apply workers.
58 : : *
59 : : * This approach is used when the user has set the subscription's streaming
60 : : * option as parallel. See logical/applyparallelworker.c for information about
61 : : * this approach.
62 : : *
63 : : * TWO_PHASE TRANSACTIONS
64 : : * ----------------------
65 : : * Two phase transactions are replayed at prepare and then committed or
66 : : * rolled back at commit prepared and rollback prepared respectively. It is
67 : : * possible to have a prepared transaction that arrives at the apply worker
68 : : * when the tablesync is busy doing the initial copy. In this case, the apply
69 : : * worker skips all the prepared operations [e.g. inserts] while the tablesync
70 : : * is still busy (see the condition of should_apply_changes_for_rel). The
71 : : * tablesync worker might not get such a prepared transaction because say it
72 : : * was prior to the initial consistent point but might have got some later
73 : : * commits. Now, the tablesync worker will exit without doing anything for the
74 : : * prepared transaction skipped by the apply worker as the sync location for it
75 : : * will be already ahead of the apply worker's current location. This would lead
76 : : * to an "empty prepare", because later when the apply worker does the commit
77 : : * prepare, there is nothing in it (the inserts were skipped earlier).
78 : : *
79 : : * To avoid this, and similar prepare confusions the subscription's two_phase
80 : : * commit is enabled only after the initial sync is over. The two_phase option
81 : : * has been implemented as a tri-state with values DISABLED, PENDING, and
82 : : * ENABLED.
83 : : *
84 : : * Even if the user specifies they want a subscription with two_phase = on,
85 : : * internally it will start with a tri-state of PENDING which only becomes
86 : : * ENABLED after all tablesync initializations are completed - i.e. when all
87 : : * tablesync workers have reached their READY state. In other words, the value
88 : : * PENDING is only a temporary state for subscription start-up.
89 : : *
90 : : * Until the two_phase is properly available (ENABLED) the subscription will
91 : : * behave as if two_phase = off. When the apply worker detects that all
92 : : * tablesyncs have become READY (while the tri-state was PENDING) it will
93 : : * restart the apply worker process. This happens in
94 : : * ProcessSyncingTablesForApply.
95 : : *
96 : : * When the (re-started) apply worker finds that all tablesyncs are READY for a
97 : : * two_phase tri-state of PENDING it start streaming messages with the
98 : : * two_phase option which in turn enables the decoding of two-phase commits at
99 : : * the publisher. Then, it updates the tri-state value from PENDING to ENABLED.
100 : : * Now, it is possible that during the time we have not enabled two_phase, the
101 : : * publisher (replication server) would have skipped some prepares but we
102 : : * ensure that such prepares are sent along with commit prepare, see
103 : : * ReorderBufferFinishPrepared.
104 : : *
105 : : * If the subscription has no tables then a two_phase tri-state PENDING is
106 : : * left unchanged. This lets the user still do an ALTER SUBSCRIPTION REFRESH
107 : : * PUBLICATION which might otherwise be disallowed (see below).
108 : : *
109 : : * If ever a user needs to be aware of the tri-state value, they can fetch it
110 : : * from the pg_subscription catalog (see column subtwophasestate).
111 : : *
112 : : * Finally, to avoid problems mentioned in previous paragraphs from any
113 : : * subsequent (not READY) tablesyncs (need to toggle two_phase option from 'on'
114 : : * to 'off' and then again back to 'on') there is a restriction for
115 : : * ALTER SUBSCRIPTION REFRESH PUBLICATION. This command is not permitted when
116 : : * the two_phase tri-state is ENABLED, except when copy_data = false.
117 : : *
118 : : * We can get prepare of the same GID more than once for the genuine cases
119 : : * where we have defined multiple subscriptions for publications on the same
120 : : * server and prepared transaction has operations on tables subscribed to those
121 : : * subscriptions. For such cases, if we use the GID sent by publisher one of
122 : : * the prepares will be successful and others will fail, in which case the
123 : : * server will send them again. Now, this can lead to a deadlock if user has
124 : : * set synchronous_standby_names for all the subscriptions on subscriber. To
125 : : * avoid such deadlocks, we generate a unique GID (consisting of the
126 : : * subscription oid and the xid of the prepared transaction) for each prepare
127 : : * transaction on the subscriber.
128 : : *
129 : : * FAILOVER
130 : : * ----------------------
131 : : * The logical slot on the primary can be synced to the standby by specifying
132 : : * failover = true when creating the subscription. Enabling failover allows us
133 : : * to smoothly transition to the promoted standby, ensuring that we can
134 : : * subscribe to the new primary without losing any data.
135 : : *
136 : : * RETAIN DEAD TUPLES
137 : : * ----------------------
138 : : * Each apply worker that enabled retain_dead_tuples option maintains a
139 : : * non-removable transaction ID (oldest_nonremovable_xid) in shared memory to
140 : : * prevent dead rows from being removed prematurely when the apply worker still
141 : : * needs them to detect update_deleted conflicts. Additionally, this helps to
142 : : * retain the required commit_ts module information, which further helps to
143 : : * detect update_origin_differs and delete_origin_differs conflicts reliably, as
144 : : * otherwise, vacuum freeze could remove the required information.
145 : : *
146 : : * The logical replication launcher manages an internal replication slot named
147 : : * "pg_conflict_detection". It asynchronously aggregates the non-removable
148 : : * transaction ID from all apply workers to determine the appropriate xmin for
149 : : * the slot, thereby retaining necessary tuples.
150 : : *
151 : : * The non-removable transaction ID in the apply worker is advanced to the
152 : : * oldest running transaction ID once all concurrent transactions on the
153 : : * publisher have been applied and flushed locally. The process involves:
154 : : *
155 : : * - RDT_GET_CANDIDATE_XID:
156 : : * Call GetOldestActiveTransactionId() to take oldestRunningXid as the
157 : : * candidate xid.
158 : : *
159 : : * - RDT_REQUEST_PUBLISHER_STATUS:
160 : : * Send a message to the walsender requesting the publisher status, which
161 : : * includes the latest WAL write position and information about transactions
162 : : * that are in the commit phase.
163 : : *
164 : : * - RDT_WAIT_FOR_PUBLISHER_STATUS:
165 : : * Wait for the status from the walsender. After receiving the first status,
166 : : * do not proceed if there are concurrent remote transactions that are still
167 : : * in the commit phase. These transactions might have been assigned an
168 : : * earlier commit timestamp but have not yet written the commit WAL record.
169 : : * Continue to request the publisher status (RDT_REQUEST_PUBLISHER_STATUS)
170 : : * until all these transactions have completed.
171 : : *
172 : : * - RDT_WAIT_FOR_LOCAL_FLUSH:
173 : : * Advance the non-removable transaction ID if the current flush location has
174 : : * reached or surpassed the last received WAL position.
175 : : *
176 : : * - RDT_STOP_CONFLICT_INFO_RETENTION:
177 : : * This phase is required only when max_retention_duration is defined. We
178 : : * enter this phase if the wait time in either the
179 : : * RDT_WAIT_FOR_PUBLISHER_STATUS or RDT_WAIT_FOR_LOCAL_FLUSH phase exceeds
180 : : * configured max_retention_duration. In this phase,
181 : : * pg_subscription.subretentionactive is updated to false within a new
182 : : * transaction, and oldest_nonremovable_xid is set to InvalidTransactionId.
183 : : *
184 : : * - RDT_RESUME_CONFLICT_INFO_RETENTION:
185 : : * This phase is required only when max_retention_duration is defined. We
186 : : * enter this phase if the retention was previously stopped, and the time
187 : : * required to advance the non-removable transaction ID in the
188 : : * RDT_WAIT_FOR_LOCAL_FLUSH phase has decreased to within acceptable limits
189 : : * (or if max_retention_duration is set to 0). During this phase,
190 : : * pg_subscription.subretentionactive is updated to true within a new
191 : : * transaction, and the worker will be restarted.
192 : : *
193 : : * The overall state progression is: GET_CANDIDATE_XID ->
194 : : * REQUEST_PUBLISHER_STATUS -> WAIT_FOR_PUBLISHER_STATUS -> (loop to
195 : : * REQUEST_PUBLISHER_STATUS till concurrent remote transactions end) ->
196 : : * WAIT_FOR_LOCAL_FLUSH -> loop back to GET_CANDIDATE_XID.
197 : : *
198 : : * Retaining the dead tuples for this period is sufficient for ensuring
199 : : * eventual consistency using last-update-wins strategy, as dead tuples are
200 : : * useful for detecting conflicts only during the application of concurrent
201 : : * transactions from remote nodes. After applying and flushing all remote
202 : : * transactions that occurred concurrently with the tuple DELETE, any
203 : : * subsequent UPDATE from a remote node should have a later timestamp. In such
204 : : * cases, it is acceptable to detect an update_missing scenario and convert the
205 : : * UPDATE to an INSERT when applying it. But, for concurrent remote
206 : : * transactions with earlier timestamps than the DELETE, detecting
207 : : * update_deleted is necessary, as the UPDATEs in remote transactions should be
208 : : * ignored if their timestamp is earlier than that of the dead tuples.
209 : : *
210 : : * Note that advancing the non-removable transaction ID is not supported if the
211 : : * publisher is also a physical standby. This is because the logical walsender
212 : : * on the standby can only get the WAL replay position but there may be more
213 : : * WALs that are being replicated from the primary and those WALs could have
214 : : * earlier commit timestamp.
215 : : *
216 : : * Similarly, when the publisher has subscribed to another publisher,
217 : : * information necessary for conflict detection cannot be retained for
218 : : * changes from origins other than the publisher. This is because publisher
219 : : * lacks the information on concurrent transactions of other publishers to
220 : : * which it subscribes. As the information on concurrent transactions is
221 : : * unavailable beyond subscriber's immediate publishers, the non-removable
222 : : * transaction ID might be advanced prematurely before changes from other
223 : : * origins have been fully applied.
224 : : *
225 : : * XXX Retaining information for changes from other origins might be possible
226 : : * by requesting the subscription on that origin to enable retain_dead_tuples
227 : : * and fetching the conflict detection slot.xmin along with the publisher's
228 : : * status. In the RDT_WAIT_FOR_PUBLISHER_STATUS phase, the apply worker could
229 : : * wait for the remote slot's xmin to reach the oldest active transaction ID,
230 : : * ensuring that all transactions from other origins have been applied on the
231 : : * publisher, thereby getting the latest WAL position that includes all
232 : : * concurrent changes. However, this approach may impact performance, so it
233 : : * might not worth the effort.
234 : : *
235 : : * XXX It seems feasible to get the latest commit's WAL location from the
236 : : * publisher and wait till that is applied. However, we can't do that
237 : : * because commit timestamps can regress as a commit with a later LSN is not
238 : : * guaranteed to have a later timestamp than those with earlier LSNs. Having
239 : : * said that, even if that is possible, it won't improve performance much as
240 : : * the apply always lag and moves slowly as compared with the transactions
241 : : * on the publisher.
242 : : *-------------------------------------------------------------------------
243 : : */
244 : :
245 : : #include "postgres.h"
246 : :
247 : : #include <sys/stat.h>
248 : : #include <unistd.h>
249 : :
250 : : #include "access/commit_ts.h"
251 : : #include "access/table.h"
252 : : #include "access/tableam.h"
253 : : #include "access/twophase.h"
254 : : #include "access/xact.h"
255 : : #include "catalog/indexing.h"
256 : : #include "catalog/pg_inherits.h"
257 : : #include "catalog/pg_subscription.h"
258 : : #include "catalog/pg_subscription_rel.h"
259 : : #include "commands/subscriptioncmds.h"
260 : : #include "commands/tablecmds.h"
261 : : #include "commands/trigger.h"
262 : : #include "executor/executor.h"
263 : : #include "executor/execPartition.h"
264 : : #include "libpq/pqformat.h"
265 : : #include "miscadmin.h"
266 : : #include "optimizer/optimizer.h"
267 : : #include "parser/parse_relation.h"
268 : : #include "pgstat.h"
269 : : #include "postmaster/bgworker.h"
270 : : #include "postmaster/interrupt.h"
271 : : #include "postmaster/walwriter.h"
272 : : #include "replication/conflict.h"
273 : : #include "replication/logicallauncher.h"
274 : : #include "replication/logicalproto.h"
275 : : #include "replication/logicalrelation.h"
276 : : #include "replication/logicalworker.h"
277 : : #include "replication/origin.h"
278 : : #include "replication/slot.h"
279 : : #include "replication/walreceiver.h"
280 : : #include "replication/worker_internal.h"
281 : : #include "rewrite/rewriteHandler.h"
282 : : #include "storage/buffile.h"
283 : : #include "storage/ipc.h"
284 : : #include "storage/lmgr.h"
285 : : #include "storage/procarray.h"
286 : : #include "tcop/tcopprot.h"
287 : : #include "utils/acl.h"
288 : : #include "utils/guc.h"
289 : : #include "utils/inval.h"
290 : : #include "utils/lsyscache.h"
291 : : #include "utils/memutils.h"
292 : : #include "utils/pg_lsn.h"
293 : : #include "utils/rel.h"
294 : : #include "utils/rls.h"
295 : : #include "utils/snapmgr.h"
296 : : #include "utils/syscache.h"
297 : : #include "utils/usercontext.h"
298 : :
299 : : #define NAPTIME_PER_CYCLE 1000 /* max sleep time between cycles (1s) */
300 : :
301 : : typedef struct FlushPosition
302 : : {
303 : : dlist_node node;
304 : : XLogRecPtr local_end;
305 : : XLogRecPtr remote_end;
306 : : } FlushPosition;
307 : :
308 : : static dlist_head lsn_mapping = DLIST_STATIC_INIT(lsn_mapping);
309 : :
310 : : typedef struct ApplyExecutionData
311 : : {
312 : : EState *estate; /* executor state, used to track resources */
313 : :
314 : : LogicalRepRelMapEntry *targetRel; /* replication target rel */
315 : : ResultRelInfo *targetRelInfo; /* ResultRelInfo for same */
316 : :
317 : : /* These fields are used when the target relation is partitioned: */
318 : : ModifyTableState *mtstate; /* dummy ModifyTable state */
319 : : PartitionTupleRouting *proute; /* partition routing info */
320 : : } ApplyExecutionData;
321 : :
322 : : /* Struct for saving and restoring apply errcontext information */
323 : : typedef struct ApplyErrorCallbackArg
324 : : {
325 : : LogicalRepMsgType command; /* 0 if invalid */
326 : : LogicalRepRelMapEntry *rel;
327 : :
328 : : /* Remote node information */
329 : : int remote_attnum; /* -1 if invalid */
330 : : TransactionId remote_xid;
331 : : XLogRecPtr finish_lsn;
332 : : char *origin_name;
333 : : } ApplyErrorCallbackArg;
334 : :
335 : : /*
336 : : * The action to be taken for the changes in the transaction.
337 : : *
338 : : * TRANS_LEADER_APPLY:
339 : : * This action means that we are in the leader apply worker or table sync
340 : : * worker. The changes of the transaction are either directly applied or
341 : : * are read from temporary files (for streaming transactions) and then
342 : : * applied by the worker.
343 : : *
344 : : * TRANS_LEADER_SERIALIZE:
345 : : * This action means that we are in the leader apply worker or table sync
346 : : * worker. Changes are written to temporary files and then applied when the
347 : : * final commit arrives.
348 : : *
349 : : * TRANS_LEADER_SEND_TO_PARALLEL:
350 : : * This action means that we are in the leader apply worker and need to send
351 : : * the changes to the parallel apply worker.
352 : : *
353 : : * TRANS_LEADER_PARTIAL_SERIALIZE:
354 : : * This action means that we are in the leader apply worker and have sent some
355 : : * changes directly to the parallel apply worker and the remaining changes are
356 : : * serialized to a file, due to timeout while sending data. The parallel apply
357 : : * worker will apply these serialized changes when the final commit arrives.
358 : : *
359 : : * We can't use TRANS_LEADER_SERIALIZE for this case because, in addition to
360 : : * serializing changes, the leader worker also needs to serialize the
361 : : * STREAM_XXX message to a file, and wait for the parallel apply worker to
362 : : * finish the transaction when processing the transaction finish command. So
363 : : * this new action was introduced to keep the code and logic clear.
364 : : *
365 : : * TRANS_PARALLEL_APPLY:
366 : : * This action means that we are in the parallel apply worker and changes of
367 : : * the transaction are applied directly by the worker.
368 : : */
369 : : typedef enum
370 : : {
371 : : /* The action for non-streaming transactions. */
372 : : TRANS_LEADER_APPLY,
373 : :
374 : : /* Actions for streaming transactions. */
375 : : TRANS_LEADER_SERIALIZE,
376 : : TRANS_LEADER_SEND_TO_PARALLEL,
377 : : TRANS_LEADER_PARTIAL_SERIALIZE,
378 : : TRANS_PARALLEL_APPLY,
379 : : } TransApplyAction;
380 : :
381 : : /*
382 : : * The phases involved in advancing the non-removable transaction ID.
383 : : *
384 : : * See comments atop worker.c for details of the transition between these
385 : : * phases.
386 : : */
387 : : typedef enum
388 : : {
389 : : RDT_GET_CANDIDATE_XID,
390 : : RDT_REQUEST_PUBLISHER_STATUS,
391 : : RDT_WAIT_FOR_PUBLISHER_STATUS,
392 : : RDT_WAIT_FOR_LOCAL_FLUSH,
393 : : RDT_STOP_CONFLICT_INFO_RETENTION,
394 : : RDT_RESUME_CONFLICT_INFO_RETENTION,
395 : : } RetainDeadTuplesPhase;
396 : :
397 : : /*
398 : : * Critical information for managing phase transitions within the
399 : : * RetainDeadTuplesPhase.
400 : : */
401 : : typedef struct RetainDeadTuplesData
402 : : {
403 : : RetainDeadTuplesPhase phase; /* current phase */
404 : : XLogRecPtr remote_lsn; /* WAL write position on the publisher */
405 : :
406 : : /*
407 : : * Oldest transaction ID that was in the commit phase on the publisher.
408 : : * Use FullTransactionId to prevent issues with transaction ID wraparound,
409 : : * where a new remote_oldestxid could falsely appear to originate from the
410 : : * past and block advancement.
411 : : */
412 : : FullTransactionId remote_oldestxid;
413 : :
414 : : /*
415 : : * Next transaction ID to be assigned on the publisher. Use
416 : : * FullTransactionId for consistency and to allow straightforward
417 : : * comparisons with remote_oldestxid.
418 : : */
419 : : FullTransactionId remote_nextxid;
420 : :
421 : : TimestampTz reply_time; /* when the publisher responds with status */
422 : :
423 : : /*
424 : : * Publisher transaction ID that must be awaited to complete before
425 : : * entering the final phase (RDT_WAIT_FOR_LOCAL_FLUSH). Use
426 : : * FullTransactionId for the same reason as remote_nextxid.
427 : : */
428 : : FullTransactionId remote_wait_for;
429 : :
430 : : TransactionId candidate_xid; /* candidate for the non-removable
431 : : * transaction ID */
432 : : TimestampTz flushpos_update_time; /* when the remote flush position was
433 : : * updated in final phase
434 : : * (RDT_WAIT_FOR_LOCAL_FLUSH) */
435 : :
436 : : long table_sync_wait_time; /* time spent waiting for table sync
437 : : * to finish */
438 : :
439 : : /*
440 : : * The following fields are used to determine the timing for the next
441 : : * round of transaction ID advancement.
442 : : */
443 : : TimestampTz last_recv_time; /* when the last message was received */
444 : : TimestampTz candidate_xid_time; /* when the candidate_xid is decided */
445 : : int xid_advance_interval; /* how much time (ms) to wait before
446 : : * attempting to advance the
447 : : * non-removable transaction ID */
448 : : } RetainDeadTuplesData;
449 : :
450 : : /*
451 : : * The minimum (100ms) and maximum (3 minutes) intervals for advancing
452 : : * non-removable transaction IDs. The maximum interval is a bit arbitrary but
453 : : * is sufficient to not cause any undue network traffic.
454 : : */
455 : : #define MIN_XID_ADVANCE_INTERVAL 100
456 : : #define MAX_XID_ADVANCE_INTERVAL 180000
457 : :
458 : : /* errcontext tracker */
459 : : static ApplyErrorCallbackArg apply_error_callback_arg =
460 : : {
461 : : .command = 0,
462 : : .rel = NULL,
463 : : .remote_attnum = -1,
464 : : .remote_xid = InvalidTransactionId,
465 : : .finish_lsn = InvalidXLogRecPtr,
466 : : .origin_name = NULL,
467 : : };
468 : :
469 : : ErrorContextCallback *apply_error_context_stack = NULL;
470 : :
471 : : MemoryContext ApplyMessageContext = NULL;
472 : : MemoryContext ApplyContext = NULL;
473 : :
474 : : /* per stream context for streaming transactions */
475 : : static MemoryContext LogicalStreamingContext = NULL;
476 : :
477 : : WalReceiverConn *LogRepWorkerWalRcvConn = NULL;
478 : :
479 : : Subscription *MySubscription = NULL;
480 : : static bool MySubscriptionValid = false;
481 : :
482 : : static List *on_commit_wakeup_workers_subids = NIL;
483 : :
484 : : bool in_remote_transaction = false;
485 : : static XLogRecPtr remote_final_lsn = InvalidXLogRecPtr;
486 : :
487 : : /* fields valid only when processing streamed transaction */
488 : : static bool in_streamed_transaction = false;
489 : :
490 : : static TransactionId stream_xid = InvalidTransactionId;
491 : :
492 : : /*
493 : : * The number of changes applied by parallel apply worker during one streaming
494 : : * block.
495 : : */
496 : : static uint32 parallel_stream_nchanges = 0;
497 : :
498 : : /* Are we initializing an apply worker? */
499 : : bool InitializingApplyWorker = false;
500 : :
501 : : /*
502 : : * We enable skipping all data modification changes (INSERT, UPDATE, etc.) for
503 : : * the subscription if the remote transaction's finish LSN matches the subskiplsn.
504 : : * Once we start skipping changes, we don't stop it until we skip all changes of
505 : : * the transaction even if pg_subscription is updated and MySubscription->skiplsn
506 : : * gets changed or reset during that. Also, in streaming transaction cases (streaming = on),
507 : : * we don't skip receiving and spooling the changes since we decide whether or not
508 : : * to skip applying the changes when starting to apply changes. The subskiplsn is
509 : : * cleared after successfully skipping the transaction or applying non-empty
510 : : * transaction. The latter prevents the mistakenly specified subskiplsn from
511 : : * being left. Note that we cannot skip the streaming transactions when using
512 : : * parallel apply workers because we cannot get the finish LSN before applying
513 : : * the changes. So, we don't start parallel apply worker when finish LSN is set
514 : : * by the user.
515 : : */
516 : : static XLogRecPtr skip_xact_finish_lsn = InvalidXLogRecPtr;
517 : : #define is_skipping_changes() (unlikely(!XLogRecPtrIsInvalid(skip_xact_finish_lsn)))
518 : :
519 : : /* BufFile handle of the current streaming file */
520 : : static BufFile *stream_fd = NULL;
521 : :
522 : : /*
523 : : * The remote WAL position that has been applied and flushed locally. We record
524 : : * and use this information both while sending feedback to the server and
525 : : * advancing oldest_nonremovable_xid.
526 : : */
527 : : static XLogRecPtr last_flushpos = InvalidXLogRecPtr;
528 : :
529 : : typedef struct SubXactInfo
530 : : {
531 : : TransactionId xid; /* XID of the subxact */
532 : : int fileno; /* file number in the buffile */
533 : : off_t offset; /* offset in the file */
534 : : } SubXactInfo;
535 : :
536 : : /* Sub-transaction data for the current streaming transaction */
537 : : typedef struct ApplySubXactData
538 : : {
539 : : uint32 nsubxacts; /* number of sub-transactions */
540 : : uint32 nsubxacts_max; /* current capacity of subxacts */
541 : : TransactionId subxact_last; /* xid of the last sub-transaction */
542 : : SubXactInfo *subxacts; /* sub-xact offset in changes file */
543 : : } ApplySubXactData;
544 : :
545 : : static ApplySubXactData subxact_data = {0, 0, InvalidTransactionId, NULL};
546 : :
547 : : static inline void subxact_filename(char *path, Oid subid, TransactionId xid);
548 : : static inline void changes_filename(char *path, Oid subid, TransactionId xid);
549 : :
550 : : /*
551 : : * Information about subtransactions of a given toplevel transaction.
552 : : */
553 : : static void subxact_info_write(Oid subid, TransactionId xid);
554 : : static void subxact_info_read(Oid subid, TransactionId xid);
555 : : static void subxact_info_add(TransactionId xid);
556 : : static inline void cleanup_subxact_info(void);
557 : :
558 : : /*
559 : : * Serialize and deserialize changes for a toplevel transaction.
560 : : */
561 : : static void stream_open_file(Oid subid, TransactionId xid,
562 : : bool first_segment);
563 : : static void stream_write_change(char action, StringInfo s);
564 : : static void stream_open_and_write_change(TransactionId xid, char action, StringInfo s);
565 : : static void stream_close_file(void);
566 : :
567 : : static void send_feedback(XLogRecPtr recvpos, bool force, bool requestReply);
568 : :
569 : : static void maybe_advance_nonremovable_xid(RetainDeadTuplesData *rdt_data,
570 : : bool status_received);
571 : : static bool can_advance_nonremovable_xid(RetainDeadTuplesData *rdt_data);
572 : : static void process_rdt_phase_transition(RetainDeadTuplesData *rdt_data,
573 : : bool status_received);
574 : : static void get_candidate_xid(RetainDeadTuplesData *rdt_data);
575 : : static void request_publisher_status(RetainDeadTuplesData *rdt_data);
576 : : static void wait_for_publisher_status(RetainDeadTuplesData *rdt_data,
577 : : bool status_received);
578 : : static void wait_for_local_flush(RetainDeadTuplesData *rdt_data);
579 : : static bool should_stop_conflict_info_retention(RetainDeadTuplesData *rdt_data);
580 : : static void stop_conflict_info_retention(RetainDeadTuplesData *rdt_data);
581 : : static void resume_conflict_info_retention(RetainDeadTuplesData *rdt_data);
582 : : static bool update_retention_status(bool active);
583 : : static void reset_retention_data_fields(RetainDeadTuplesData *rdt_data);
584 : : static void adjust_xid_advance_interval(RetainDeadTuplesData *rdt_data,
585 : : bool new_xid_found);
586 : :
587 : : static void apply_worker_exit(void);
588 : :
589 : : static void apply_handle_commit_internal(LogicalRepCommitData *commit_data);
590 : : static void apply_handle_insert_internal(ApplyExecutionData *edata,
591 : : ResultRelInfo *relinfo,
592 : : TupleTableSlot *remoteslot);
593 : : static void apply_handle_update_internal(ApplyExecutionData *edata,
594 : : ResultRelInfo *relinfo,
595 : : TupleTableSlot *remoteslot,
596 : : LogicalRepTupleData *newtup,
597 : : Oid localindexoid);
598 : : static void apply_handle_delete_internal(ApplyExecutionData *edata,
599 : : ResultRelInfo *relinfo,
600 : : TupleTableSlot *remoteslot,
601 : : Oid localindexoid);
602 : : static bool FindReplTupleInLocalRel(ApplyExecutionData *edata, Relation localrel,
603 : : LogicalRepRelation *remoterel,
604 : : Oid localidxoid,
605 : : TupleTableSlot *remoteslot,
606 : : TupleTableSlot **localslot);
607 : : static bool FindDeletedTupleInLocalRel(Relation localrel,
608 : : Oid localidxoid,
609 : : TupleTableSlot *remoteslot,
610 : : TransactionId *delete_xid,
611 : : RepOriginId *delete_origin,
612 : : TimestampTz *delete_time);
613 : : static void apply_handle_tuple_routing(ApplyExecutionData *edata,
614 : : TupleTableSlot *remoteslot,
615 : : LogicalRepTupleData *newtup,
616 : : CmdType operation);
617 : :
618 : : /* Functions for skipping changes */
619 : : static void maybe_start_skipping_changes(XLogRecPtr finish_lsn);
620 : : static void stop_skipping_changes(void);
621 : : static void clear_subscription_skip_lsn(XLogRecPtr finish_lsn);
622 : :
623 : : /* Functions for apply error callback */
624 : : static inline void set_apply_error_context_xact(TransactionId xid, XLogRecPtr lsn);
625 : : static inline void reset_apply_error_context_info(void);
626 : :
627 : : static TransApplyAction get_transaction_apply_action(TransactionId xid,
628 : : ParallelApplyWorkerInfo **winfo);
629 : :
630 : : static void replorigin_reset(int code, Datum arg);
631 : :
632 : : /*
633 : : * Form the origin name for the subscription.
634 : : *
635 : : * This is a common function for tablesync and other workers. Tablesync workers
636 : : * must pass a valid relid. Other callers must pass relid = InvalidOid.
637 : : *
638 : : * Return the name in the supplied buffer.
639 : : */
640 : : void
1113 akapila@postgresql.o 641 :CBC 1307 : ReplicationOriginNameForLogicalRep(Oid suboid, Oid relid,
642 : : char *originname, Size szoriginname)
643 : : {
644 [ + + ]: 1307 : if (OidIsValid(relid))
645 : : {
646 : : /* Replication origin name for tablesync workers. */
647 : 749 : snprintf(originname, szoriginname, "pg_%u_%u", suboid, relid);
648 : : }
649 : : else
650 : : {
651 : : /* Replication origin name for non-tablesync workers. */
652 : 558 : snprintf(originname, szoriginname, "pg_%u", suboid);
653 : : }
654 : 1307 : }
655 : :
656 : : /*
657 : : * Should this worker apply changes for given relation.
658 : : *
659 : : * This is mainly needed for initial relation data sync as that runs in
660 : : * separate worker process running in parallel and we need some way to skip
661 : : * changes coming to the leader apply worker during the sync of a table.
662 : : *
663 : : * Note we need to do smaller or equals comparison for SYNCDONE state because
664 : : * it might hold position of end of initial slot consistent point WAL
665 : : * record + 1 (ie start of next record) and next record can be COMMIT of
666 : : * transaction we are now processing (which is what we set remote_final_lsn
667 : : * to in apply_handle_begin).
668 : : *
669 : : * Note that for streaming transactions that are being applied in the parallel
670 : : * apply worker, we disallow applying changes if the target table in the
671 : : * subscription is not in the READY state, because we cannot decide whether to
672 : : * apply the change as we won't know remote_final_lsn by that time.
673 : : *
674 : : * We already checked this in pa_can_start() before assigning the
675 : : * streaming transaction to the parallel worker, but it also needs to be
676 : : * checked here because if the user executes ALTER SUBSCRIPTION ... REFRESH
677 : : * PUBLICATION in parallel, the new table can be added to pg_subscription_rel
678 : : * while applying this transaction.
679 : : */
680 : : static bool
3141 peter_e@gmx.net 681 : 148185 : should_apply_changes_for_rel(LogicalRepRelMapEntry *rel)
682 : : {
798 akapila@postgresql.o 683 [ - + + - : 148185 : switch (MyLogicalRepWorker->type)
- ]
684 : : {
798 akapila@postgresql.o 685 :UBC 0 : case WORKERTYPE_TABLESYNC:
686 : 0 : return MyLogicalRepWorker->relid == rel->localreloid;
687 : :
798 akapila@postgresql.o 688 :CBC 68423 : case WORKERTYPE_PARALLEL_APPLY:
689 : : /* We don't synchronize rel's that are in unknown state. */
690 [ - + ]: 68423 : if (rel->state != SUBREL_STATE_READY &&
798 akapila@postgresql.o 691 [ # # ]:UBC 0 : rel->state != SUBREL_STATE_UNKNOWN)
692 [ # # ]: 0 : ereport(ERROR,
693 : : (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
694 : : errmsg("logical replication parallel apply worker for subscription \"%s\" will stop",
695 : : MySubscription->name),
696 : : errdetail("Cannot handle streamed replication transactions using parallel apply workers until all tables have been synchronized.")));
697 : :
798 akapila@postgresql.o 698 :CBC 68423 : return rel->state == SUBREL_STATE_READY;
699 : :
700 : 79762 : case WORKERTYPE_APPLY:
701 [ + + ]: 79838 : return (rel->state == SUBREL_STATE_READY ||
702 [ + + ]: 76 : (rel->state == SUBREL_STATE_SYNCDONE &&
703 [ + - ]: 27 : rel->statelsn <= remote_final_lsn));
704 : :
798 akapila@postgresql.o 705 :UBC 0 : case WORKERTYPE_UNKNOWN:
706 : : /* Should never happen. */
707 [ # # ]: 0 : elog(ERROR, "Unknown worker type");
708 : : }
709 : :
710 : 0 : return false; /* dummy for compiler */
711 : : }
712 : :
713 : : /*
714 : : * Begin one step (one INSERT, UPDATE, etc) of a replication transaction.
715 : : *
716 : : * Start a transaction, if this is the first step (else we keep using the
717 : : * existing transaction).
718 : : * Also provide a global snapshot and ensure we run in ApplyMessageContext.
719 : : */
720 : : static void
1601 tgl@sss.pgh.pa.us 721 :CBC 148641 : begin_replication_step(void)
722 : : {
723 : 148641 : SetCurrentStatementStartTimestamp();
724 : :
725 [ + + ]: 148641 : if (!IsTransactionState())
726 : : {
727 : 968 : StartTransactionCommand();
728 : 968 : maybe_reread_subscription();
729 : : }
730 : :
731 : 148638 : PushActiveSnapshot(GetTransactionSnapshot());
732 : :
3094 peter_e@gmx.net 733 : 148638 : MemoryContextSwitchTo(ApplyMessageContext);
1601 tgl@sss.pgh.pa.us 734 : 148638 : }
735 : :
736 : : /*
737 : : * Finish up one step of a replication transaction.
738 : : * Callers of begin_replication_step() must also call this.
739 : : *
740 : : * We don't close out the transaction here, but we should increment
741 : : * the command counter to make the effects of this step visible.
742 : : */
743 : : static void
744 : 148590 : end_replication_step(void)
745 : : {
746 : 148590 : PopActiveSnapshot();
747 : :
748 : 148590 : CommandCounterIncrement();
3204 peter_e@gmx.net 749 : 148590 : }
750 : :
751 : : /*
752 : : * Handle streamed transactions for both the leader apply worker and the
753 : : * parallel apply workers.
754 : : *
755 : : * In the streaming case (receiving a block of the streamed transaction), for
756 : : * serialize mode, simply redirect it to a file for the proper toplevel
757 : : * transaction, and for parallel mode, the leader apply worker will send the
758 : : * changes to parallel apply workers and the parallel apply worker will define
759 : : * savepoints if needed. (LOGICAL_REP_MSG_RELATION or LOGICAL_REP_MSG_TYPE
760 : : * messages will be applied by both leader apply worker and parallel apply
761 : : * workers).
762 : : *
763 : : * Returns true for streamed transactions (when the change is either serialized
764 : : * to file or sent to parallel apply worker), false otherwise (regular mode or
765 : : * needs to be processed by parallel apply worker).
766 : : *
767 : : * Exception: If the message being processed is LOGICAL_REP_MSG_RELATION
768 : : * or LOGICAL_REP_MSG_TYPE, return false even if the message needs to be sent
769 : : * to a parallel apply worker.
770 : : */
771 : : static bool
1797 akapila@postgresql.o 772 : 324519 : handle_streamed_transaction(LogicalRepMsgType action, StringInfo s)
773 : : {
774 : : TransactionId current_xid;
775 : : ParallelApplyWorkerInfo *winfo;
776 : : TransApplyAction apply_action;
777 : : StringInfoData original_msg;
778 : :
1023 779 : 324519 : apply_action = get_transaction_apply_action(stream_xid, &winfo);
780 : :
781 : : /* not in streaming mode */
782 [ + + ]: 324519 : if (apply_action == TRANS_LEADER_APPLY)
1881 783 : 80164 : return false;
784 : :
785 [ - + ]: 244355 : Assert(TransactionIdIsValid(stream_xid));
786 : :
787 : : /*
788 : : * The parallel apply worker needs the xid in this message to decide
789 : : * whether to define a savepoint, so save the original message that has
790 : : * not moved the cursor after the xid. We will serialize this message to a
791 : : * file in PARTIAL_SERIALIZE mode.
792 : : */
1023 793 : 244355 : original_msg = *s;
794 : :
795 : : /*
796 : : * We should have received XID of the subxact as the first part of the
797 : : * message, so extract it.
798 : : */
799 : 244355 : current_xid = pq_getmsgint(s, 4);
800 : :
801 [ - + ]: 244355 : if (!TransactionIdIsValid(current_xid))
1599 tgl@sss.pgh.pa.us 802 [ # # ]:UBC 0 : ereport(ERROR,
803 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
804 : : errmsg_internal("invalid transaction ID in streamed replication transaction")));
805 : :
1023 akapila@postgresql.o 806 [ + + + + :CBC 244355 : switch (apply_action)
- ]
807 : : {
808 : 102512 : case TRANS_LEADER_SERIALIZE:
809 [ - + ]: 102512 : Assert(stream_fd);
810 : :
811 : : /* Add the new subxact to the array (unless already there). */
812 : 102512 : subxact_info_add(current_xid);
813 : :
814 : : /* Write the change to the current file */
815 : 102512 : stream_write_change(action, s);
816 : 102512 : return true;
817 : :
818 : 68386 : case TRANS_LEADER_SEND_TO_PARALLEL:
819 [ - + ]: 68386 : Assert(winfo);
820 : :
821 : : /*
822 : : * XXX The publisher side doesn't always send relation/type update
823 : : * messages after the streaming transaction, so also update the
824 : : * relation/type in leader apply worker. See function
825 : : * cleanup_rel_sync_cache.
826 : : */
827 [ + - ]: 68386 : if (pa_send_data(winfo, s->len, s->data))
828 [ + + + - ]: 68386 : return (action != LOGICAL_REP_MSG_RELATION &&
829 : : action != LOGICAL_REP_MSG_TYPE);
830 : :
831 : : /*
832 : : * Switch to serialize mode when we are not able to send the
833 : : * change to parallel apply worker.
834 : : */
1023 akapila@postgresql.o 835 :UBC 0 : pa_switch_to_partial_serialize(winfo, false);
836 : :
837 : : /* fall through */
1023 akapila@postgresql.o 838 :CBC 5006 : case TRANS_LEADER_PARTIAL_SERIALIZE:
839 : 5006 : stream_write_change(action, &original_msg);
840 : :
841 : : /* Same reason as TRANS_LEADER_SEND_TO_PARALLEL case. */
842 [ + + + - ]: 5006 : return (action != LOGICAL_REP_MSG_RELATION &&
843 : : action != LOGICAL_REP_MSG_TYPE);
844 : :
845 : 68451 : case TRANS_PARALLEL_APPLY:
846 : 68451 : parallel_stream_nchanges += 1;
847 : :
848 : : /* Define a savepoint for a subxact if needed. */
849 : 68451 : pa_start_subtrans(current_xid, stream_xid);
850 : 68451 : return false;
851 : :
1023 akapila@postgresql.o 852 :UBC 0 : default:
918 msawada@postgresql.o 853 [ # # ]: 0 : elog(ERROR, "unexpected apply action: %d", (int) apply_action);
854 : : return false; /* silence compiler warning */
855 : : }
856 : : }
857 : :
858 : : /*
859 : : * Executor state preparation for evaluation of constraint expressions,
860 : : * indexes and triggers for the specified relation.
861 : : *
862 : : * Note that the caller must open and close any indexes to be updated.
863 : : */
864 : : static ApplyExecutionData *
1620 tgl@sss.pgh.pa.us 865 :CBC 148107 : create_edata_for_relation(LogicalRepRelMapEntry *rel)
866 : : {
867 : : ApplyExecutionData *edata;
868 : : EState *estate;
869 : : RangeTblEntry *rte;
967 870 : 148107 : List *perminfos = NIL;
871 : : ResultRelInfo *resultRelInfo;
872 : :
1620 873 : 148107 : edata = (ApplyExecutionData *) palloc0(sizeof(ApplyExecutionData));
874 : 148107 : edata->targetRel = rel;
875 : :
876 : 148107 : edata->estate = estate = CreateExecutorState();
877 : :
3204 peter_e@gmx.net 878 : 148107 : rte = makeNode(RangeTblEntry);
879 : 148107 : rte->rtekind = RTE_RELATION;
880 : 148107 : rte->relid = RelationGetRelid(rel->localrel);
881 : 148107 : rte->relkind = rel->localrel->rd_rel->relkind;
2585 tgl@sss.pgh.pa.us 882 : 148107 : rte->rellockmode = AccessShareLock;
883 : :
967 884 : 148107 : addRTEPermissionInfo(&perminfos, rte);
885 : :
263 amitlan@postgresql.o 886 : 148107 : ExecInitRangeTable(estate, list_make1(rte), perminfos,
887 : : bms_make_singleton(1));
888 : :
1620 tgl@sss.pgh.pa.us 889 : 148107 : edata->targetRelInfo = resultRelInfo = makeNode(ResultRelInfo);
890 : :
891 : : /*
892 : : * Use Relation opened by logicalrep_rel_open() instead of opening it
893 : : * again.
894 : : */
895 : 148107 : InitResultRelInfo(resultRelInfo, rel->localrel, 1, NULL, 0);
896 : :
897 : : /*
898 : : * We put the ResultRelInfo in the es_opened_result_relations list, even
899 : : * though we don't populate the es_result_relations array. That's a bit
900 : : * bogus, but it's enough to make ExecGetTriggerResultRel() find them.
901 : : *
902 : : * ExecOpenIndices() is not called here either, each execution path doing
903 : : * an apply operation being responsible for that.
904 : : */
1650 michael@paquier.xyz 905 : 148107 : estate->es_opened_result_relations =
1620 tgl@sss.pgh.pa.us 906 : 148107 : lappend(estate->es_opened_result_relations, resultRelInfo);
907 : :
2897 simon@2ndQuadrant.co 908 : 148107 : estate->es_output_cid = GetCurrentCommandId(true);
909 : :
910 : : /* Prepare to catch AFTER triggers. */
3161 peter_e@gmx.net 911 : 148107 : AfterTriggerBeginQuery();
912 : :
913 : : /* other fields of edata remain NULL for now */
914 : :
1620 tgl@sss.pgh.pa.us 915 : 148107 : return edata;
916 : : }
917 : :
918 : : /*
919 : : * Finish any operations related to the executor state created by
920 : : * create_edata_for_relation().
921 : : */
922 : : static void
923 : 148068 : finish_edata(ApplyExecutionData *edata)
924 : : {
925 : 148068 : EState *estate = edata->estate;
926 : :
927 : : /* Handle any queued AFTER triggers. */
1650 michael@paquier.xyz 928 : 148068 : AfterTriggerEndQuery(estate);
929 : :
930 : : /* Shut down tuple routing, if any was done. */
1620 tgl@sss.pgh.pa.us 931 [ + + ]: 148068 : if (edata->proute)
932 : 74 : ExecCleanupTupleRouting(edata->mtstate, edata->proute);
933 : :
934 : : /*
935 : : * Cleanup. It might seem that we should call ExecCloseResultRelations()
936 : : * here, but we intentionally don't. It would close the rel we added to
937 : : * es_opened_result_relations above, which is wrong because we took no
938 : : * corresponding refcount. We rely on ExecCleanupTupleRouting() to close
939 : : * any other relations opened during execution.
940 : : */
1650 michael@paquier.xyz 941 : 148068 : ExecResetTupleTable(estate->es_tupleTable, false);
942 : 148068 : FreeExecutorState(estate);
1620 tgl@sss.pgh.pa.us 943 : 148068 : pfree(edata);
1650 michael@paquier.xyz 944 : 148068 : }
945 : :
946 : : /*
947 : : * Executes default values for columns for which we can't map to remote
948 : : * relation columns.
949 : : *
950 : : * This allows us to support tables which have more columns on the downstream
951 : : * than on the upstream.
952 : : */
953 : : static void
3204 peter_e@gmx.net 954 : 75835 : slot_fill_defaults(LogicalRepRelMapEntry *rel, EState *estate,
955 : : TupleTableSlot *slot)
956 : : {
957 : 75835 : TupleDesc desc = RelationGetDescr(rel->localrel);
958 : 75835 : int num_phys_attrs = desc->natts;
959 : : int i;
960 : : int attnum,
961 : 75835 : num_defaults = 0;
962 : : int *defmap;
963 : : ExprState **defexprs;
964 : : ExprContext *econtext;
965 : :
966 [ + - ]: 75835 : econtext = GetPerTupleExprContext(estate);
967 : :
968 : : /* We got all the data via replication, no need to evaluate anything. */
969 [ + + ]: 75835 : if (num_phys_attrs == rel->remoterel.natts)
970 : 35690 : return;
971 : :
972 : 40145 : defmap = (int *) palloc(num_phys_attrs * sizeof(int));
973 : 40145 : defexprs = (ExprState **) palloc(num_phys_attrs * sizeof(ExprState *));
974 : :
2141 michael@paquier.xyz 975 [ - + ]: 40145 : Assert(rel->attrmap->maplen == num_phys_attrs);
3204 peter_e@gmx.net 976 [ + + ]: 210663 : for (attnum = 0; attnum < num_phys_attrs; attnum++)
977 : : {
6 drowley@postgresql.o 978 :GNC 170518 : CompactAttribute *cattr = TupleDescCompactAttr(desc, attnum);
979 : : Expr *defexpr;
980 : :
981 [ + - + + ]: 170518 : if (cattr->attisdropped || cattr->attgenerated)
3204 peter_e@gmx.net 982 :CBC 9 : continue;
983 : :
2141 michael@paquier.xyz 984 [ + + ]: 170509 : if (rel->attrmap->attnums[attnum] >= 0)
3204 peter_e@gmx.net 985 : 92268 : continue;
986 : :
987 : 78241 : defexpr = (Expr *) build_column_default(rel->localrel, attnum + 1);
988 : :
989 [ + + ]: 78241 : if (defexpr != NULL)
990 : : {
991 : : /* Run the expression through planner */
992 : 70131 : defexpr = expression_planner(defexpr);
993 : :
994 : : /* Initialize executable expression in copycontext */
995 : 70131 : defexprs[num_defaults] = ExecInitExpr(defexpr, NULL);
996 : 70131 : defmap[num_defaults] = attnum;
997 : 70131 : num_defaults++;
998 : : }
999 : : }
1000 : :
1001 [ + + ]: 110276 : for (i = 0; i < num_defaults; i++)
1002 : 70131 : slot->tts_values[defmap[i]] =
1003 : 70131 : ExecEvalExpr(defexprs[i], econtext, &slot->tts_isnull[defmap[i]]);
1004 : : }
1005 : :
1006 : : /*
1007 : : * Store tuple data into slot.
1008 : : *
1009 : : * Incoming data can be either text or binary format.
1010 : : */
1011 : : static void
1928 tgl@sss.pgh.pa.us 1012 : 148134 : slot_store_data(TupleTableSlot *slot, LogicalRepRelMapEntry *rel,
1013 : : LogicalRepTupleData *tupleData)
1014 : : {
3086 bruce@momjian.us 1015 : 148134 : int natts = slot->tts_tupleDescriptor->natts;
1016 : : int i;
1017 : :
3204 peter_e@gmx.net 1018 : 148134 : ExecClearTuple(slot);
1019 : :
1020 : : /* Call the "in" function for each non-dropped, non-null attribute */
2141 michael@paquier.xyz 1021 [ - + ]: 148134 : Assert(natts == rel->attrmap->maplen);
3204 peter_e@gmx.net 1022 [ + + ]: 657842 : for (i = 0; i < natts; i++)
1023 : : {
2991 andres@anarazel.de 1024 : 509708 : Form_pg_attribute att = TupleDescAttr(slot->tts_tupleDescriptor, i);
2141 michael@paquier.xyz 1025 : 509708 : int remoteattnum = rel->attrmap->attnums[i];
1026 : :
1928 tgl@sss.pgh.pa.us 1027 [ + + + + ]: 509708 : if (!att->attisdropped && remoteattnum >= 0)
3204 peter_e@gmx.net 1028 : 302791 : {
1928 tgl@sss.pgh.pa.us 1029 : 302791 : StringInfo colvalue = &tupleData->colvalues[remoteattnum];
1030 : :
1926 1031 [ - + ]: 302791 : Assert(remoteattnum < tupleData->ncols);
1032 : :
1033 : : /* Set attnum for error callback */
1523 akapila@postgresql.o 1034 : 302791 : apply_error_callback_arg.remote_attnum = remoteattnum;
1035 : :
1928 tgl@sss.pgh.pa.us 1036 [ + + ]: 302791 : if (tupleData->colstatus[remoteattnum] == LOGICALREP_COLUMN_TEXT)
1037 : : {
1038 : : Oid typinput;
1039 : : Oid typioparam;
1040 : :
1041 : 142393 : getTypeInputInfo(att->atttypid, &typinput, &typioparam);
1042 : 284786 : slot->tts_values[i] =
1043 : 142393 : OidInputFunctionCall(typinput, colvalue->data,
1044 : : typioparam, att->atttypmod);
1045 : 142393 : slot->tts_isnull[i] = false;
1046 : : }
1047 [ + + ]: 160398 : else if (tupleData->colstatus[remoteattnum] == LOGICALREP_COLUMN_BINARY)
1048 : : {
1049 : : Oid typreceive;
1050 : : Oid typioparam;
1051 : :
1052 : : /*
1053 : : * In some code paths we may be asked to re-parse the same
1054 : : * tuple data. Reset the StringInfo's cursor so that works.
1055 : : */
1056 : 110038 : colvalue->cursor = 0;
1057 : :
1058 : 110038 : getTypeBinaryInputInfo(att->atttypid, &typreceive, &typioparam);
1059 : 220076 : slot->tts_values[i] =
1060 : 110038 : OidReceiveFunctionCall(typreceive, colvalue,
1061 : : typioparam, att->atttypmod);
1062 : :
1063 : : /* Trouble if it didn't eat the whole buffer */
1064 [ - + ]: 110038 : if (colvalue->cursor != colvalue->len)
1928 tgl@sss.pgh.pa.us 1065 [ # # ]:UBC 0 : ereport(ERROR,
1066 : : (errcode(ERRCODE_INVALID_BINARY_REPRESENTATION),
1067 : : errmsg("incorrect binary data format in logical replication column %d",
1068 : : remoteattnum + 1)));
1928 tgl@sss.pgh.pa.us 1069 :CBC 110038 : slot->tts_isnull[i] = false;
1070 : : }
1071 : : else
1072 : : {
1073 : : /*
1074 : : * NULL value from remote. (We don't expect to see
1075 : : * LOGICALREP_COLUMN_UNCHANGED here, but if we do, treat it as
1076 : : * NULL.)
1077 : : */
1078 : 50360 : slot->tts_values[i] = (Datum) 0;
1079 : 50360 : slot->tts_isnull[i] = true;
1080 : : }
1081 : :
1082 : : /* Reset attnum for error callback */
1523 akapila@postgresql.o 1083 : 302791 : apply_error_callback_arg.remote_attnum = -1;
1084 : : }
1085 : : else
1086 : : {
1087 : : /*
1088 : : * We assign NULL to dropped attributes and missing values
1089 : : * (missing values should be later filled using
1090 : : * slot_fill_defaults).
1091 : : */
3204 peter_e@gmx.net 1092 : 206917 : slot->tts_values[i] = (Datum) 0;
1093 : 206917 : slot->tts_isnull[i] = true;
1094 : : }
1095 : : }
1096 : :
1097 : 148134 : ExecStoreVirtualTuple(slot);
1098 : 148134 : }
1099 : :
1100 : : /*
1101 : : * Replace updated columns with data from the LogicalRepTupleData struct.
1102 : : * This is somewhat similar to heap_modify_tuple but also calls the type
1103 : : * input functions on the user data.
1104 : : *
1105 : : * "slot" is filled with a copy of the tuple in "srcslot", replacing
1106 : : * columns provided in "tupleData" and leaving others as-is.
1107 : : *
1108 : : * Caution: unreplaced pass-by-ref columns in "slot" will point into the
1109 : : * storage for "srcslot". This is OK for current usage, but someday we may
1110 : : * need to materialize "slot" at the end to make it independent of "srcslot".
1111 : : */
1112 : : static void
1928 tgl@sss.pgh.pa.us 1113 : 31924 : slot_modify_data(TupleTableSlot *slot, TupleTableSlot *srcslot,
1114 : : LogicalRepRelMapEntry *rel,
1115 : : LogicalRepTupleData *tupleData)
1116 : : {
3086 bruce@momjian.us 1117 : 31924 : int natts = slot->tts_tupleDescriptor->natts;
1118 : : int i;
1119 : :
1120 : : /* We'll fill "slot" with a virtual tuple, so we must start with ... */
3204 peter_e@gmx.net 1121 : 31924 : ExecClearTuple(slot);
1122 : :
1123 : : /*
1124 : : * Copy all the column data from srcslot, so that we'll have valid values
1125 : : * for unreplaced columns.
1126 : : */
2167 tgl@sss.pgh.pa.us 1127 [ - + ]: 31924 : Assert(natts == srcslot->tts_tupleDescriptor->natts);
1128 : 31924 : slot_getallattrs(srcslot);
1129 : 31924 : memcpy(slot->tts_values, srcslot->tts_values, natts * sizeof(Datum));
1130 : 31924 : memcpy(slot->tts_isnull, srcslot->tts_isnull, natts * sizeof(bool));
1131 : :
1132 : : /* Call the "in" function for each replaced attribute */
2141 michael@paquier.xyz 1133 [ - + ]: 31924 : Assert(natts == rel->attrmap->maplen);
3204 peter_e@gmx.net 1134 [ + + ]: 159280 : for (i = 0; i < natts; i++)
1135 : : {
2991 andres@anarazel.de 1136 : 127356 : Form_pg_attribute att = TupleDescAttr(slot->tts_tupleDescriptor, i);
2141 michael@paquier.xyz 1137 : 127356 : int remoteattnum = rel->attrmap->attnums[i];
1138 : :
2916 peter_e@gmx.net 1139 [ + + ]: 127356 : if (remoteattnum < 0)
3204 1140 : 58519 : continue;
1141 : :
1926 tgl@sss.pgh.pa.us 1142 [ - + ]: 68837 : Assert(remoteattnum < tupleData->ncols);
1143 : :
1928 1144 [ + + ]: 68837 : if (tupleData->colstatus[remoteattnum] != LOGICALREP_COLUMN_UNCHANGED)
1145 : : {
1146 : 68834 : StringInfo colvalue = &tupleData->colvalues[remoteattnum];
1147 : :
1148 : : /* Set attnum for error callback */
1523 akapila@postgresql.o 1149 : 68834 : apply_error_callback_arg.remote_attnum = remoteattnum;
1150 : :
1928 tgl@sss.pgh.pa.us 1151 [ + + ]: 68834 : if (tupleData->colstatus[remoteattnum] == LOGICALREP_COLUMN_TEXT)
1152 : : {
1153 : : Oid typinput;
1154 : : Oid typioparam;
1155 : :
1156 : 25430 : getTypeInputInfo(att->atttypid, &typinput, &typioparam);
1157 : 50860 : slot->tts_values[i] =
1158 : 25430 : OidInputFunctionCall(typinput, colvalue->data,
1159 : : typioparam, att->atttypmod);
1160 : 25430 : slot->tts_isnull[i] = false;
1161 : : }
1162 [ + + ]: 43404 : else if (tupleData->colstatus[remoteattnum] == LOGICALREP_COLUMN_BINARY)
1163 : : {
1164 : : Oid typreceive;
1165 : : Oid typioparam;
1166 : :
1167 : : /*
1168 : : * In some code paths we may be asked to re-parse the same
1169 : : * tuple data. Reset the StringInfo's cursor so that works.
1170 : : */
1171 : 43356 : colvalue->cursor = 0;
1172 : :
1173 : 43356 : getTypeBinaryInputInfo(att->atttypid, &typreceive, &typioparam);
1174 : 86712 : slot->tts_values[i] =
1175 : 43356 : OidReceiveFunctionCall(typreceive, colvalue,
1176 : : typioparam, att->atttypmod);
1177 : :
1178 : : /* Trouble if it didn't eat the whole buffer */
1179 [ - + ]: 43356 : if (colvalue->cursor != colvalue->len)
1928 tgl@sss.pgh.pa.us 1180 [ # # ]:UBC 0 : ereport(ERROR,
1181 : : (errcode(ERRCODE_INVALID_BINARY_REPRESENTATION),
1182 : : errmsg("incorrect binary data format in logical replication column %d",
1183 : : remoteattnum + 1)));
1928 tgl@sss.pgh.pa.us 1184 :CBC 43356 : slot->tts_isnull[i] = false;
1185 : : }
1186 : : else
1187 : : {
1188 : : /* must be LOGICALREP_COLUMN_NULL */
1189 : 48 : slot->tts_values[i] = (Datum) 0;
1190 : 48 : slot->tts_isnull[i] = true;
1191 : : }
1192 : :
1193 : : /* Reset attnum for error callback */
1523 akapila@postgresql.o 1194 : 68834 : apply_error_callback_arg.remote_attnum = -1;
1195 : : }
1196 : : }
1197 : :
1198 : : /* And finally, declare that "slot" contains a valid virtual tuple */
3204 peter_e@gmx.net 1199 : 31924 : ExecStoreVirtualTuple(slot);
1200 : 31924 : }
1201 : :
1202 : : /*
1203 : : * Handle BEGIN message.
1204 : : */
1205 : : static void
1206 : 494 : apply_handle_begin(StringInfo s)
1207 : : {
1208 : : LogicalRepBeginData begin_data;
1209 : :
1210 : : /* There must not be an active streaming transaction. */
1015 akapila@postgresql.o 1211 [ - + ]: 494 : Assert(!TransactionIdIsValid(stream_xid));
1212 : :
3204 peter_e@gmx.net 1213 : 494 : logicalrep_read_begin(s, &begin_data);
1330 akapila@postgresql.o 1214 : 494 : set_apply_error_context_xact(begin_data.xid, begin_data.final_lsn);
1215 : :
3141 peter_e@gmx.net 1216 : 494 : remote_final_lsn = begin_data.final_lsn;
1217 : :
1316 akapila@postgresql.o 1218 : 494 : maybe_start_skipping_changes(begin_data.final_lsn);
1219 : :
3204 peter_e@gmx.net 1220 : 494 : in_remote_transaction = true;
1221 : :
1222 : 494 : pgstat_report_activity(STATE_RUNNING, NULL);
1223 : 494 : }
1224 : :
1225 : : /*
1226 : : * Handle COMMIT message.
1227 : : *
1228 : : * TODO, support tracking of multiple origins
1229 : : */
1230 : : static void
1231 : 445 : apply_handle_commit(StringInfo s)
1232 : : {
1233 : : LogicalRepCommitData commit_data;
1234 : :
1235 : 445 : logicalrep_read_commit(s, &commit_data);
1236 : :
1599 tgl@sss.pgh.pa.us 1237 [ - + ]: 445 : if (commit_data.commit_lsn != remote_final_lsn)
1599 tgl@sss.pgh.pa.us 1238 [ # # ]:UBC 0 : ereport(ERROR,
1239 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1240 : : errmsg_internal("incorrect commit LSN %X/%08X in commit message (expected %X/%08X)",
1241 : : LSN_FORMAT_ARGS(commit_data.commit_lsn),
1242 : : LSN_FORMAT_ARGS(remote_final_lsn))));
1243 : :
1551 akapila@postgresql.o 1244 :CBC 445 : apply_handle_commit_internal(&commit_data);
1245 : :
1246 : : /* Process any tables that are being synchronized in parallel. */
12 akapila@postgresql.o 1247 :GNC 445 : ProcessSyncingRelations(commit_data.end_lsn);
1248 : :
3204 peter_e@gmx.net 1249 :CBC 445 : pgstat_report_activity(STATE_IDLE, NULL);
1523 akapila@postgresql.o 1250 : 445 : reset_apply_error_context_info();
3204 peter_e@gmx.net 1251 : 445 : }
1252 : :
1253 : : /*
1254 : : * Handle BEGIN PREPARE message.
1255 : : */
1256 : : static void
1567 akapila@postgresql.o 1257 : 16 : apply_handle_begin_prepare(StringInfo s)
1258 : : {
1259 : : LogicalRepPreparedTxnData begin_data;
1260 : :
1261 : : /* Tablesync should never receive prepare. */
1262 [ - + ]: 16 : if (am_tablesync_worker())
1567 akapila@postgresql.o 1263 [ # # ]:UBC 0 : ereport(ERROR,
1264 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1265 : : errmsg_internal("tablesync worker received a BEGIN PREPARE message")));
1266 : :
1267 : : /* There must not be an active streaming transaction. */
1015 akapila@postgresql.o 1268 [ - + ]:CBC 16 : Assert(!TransactionIdIsValid(stream_xid));
1269 : :
1567 1270 : 16 : logicalrep_read_begin_prepare(s, &begin_data);
1330 1271 : 16 : set_apply_error_context_xact(begin_data.xid, begin_data.prepare_lsn);
1272 : :
1567 1273 : 16 : remote_final_lsn = begin_data.prepare_lsn;
1274 : :
1316 1275 : 16 : maybe_start_skipping_changes(begin_data.prepare_lsn);
1276 : :
1567 1277 : 16 : in_remote_transaction = true;
1278 : :
1279 : 16 : pgstat_report_activity(STATE_RUNNING, NULL);
1280 : 16 : }
1281 : :
1282 : : /*
1283 : : * Common function to prepare the GID.
1284 : : */
1285 : : static void
1552 1286 : 23 : apply_handle_prepare_internal(LogicalRepPreparedTxnData *prepare_data)
1287 : : {
1288 : : char gid[GIDSIZE];
1289 : :
1290 : : /*
1291 : : * Compute unique GID for two_phase transactions. We don't use GID of
1292 : : * prepared transaction sent by server as that can lead to deadlock when
1293 : : * we have multiple subscriptions from same node point to publications on
1294 : : * the same node. See comments atop worker.c
1295 : : */
1296 : 23 : TwoPhaseTransactionGid(MySubscription->oid, prepare_data->xid,
1297 : : gid, sizeof(gid));
1298 : :
1299 : : /*
1300 : : * BeginTransactionBlock is necessary to balance the EndTransactionBlock
1301 : : * called within the PrepareTransactionBlock below.
1302 : : */
1023 1303 [ + - ]: 23 : if (!IsTransactionBlock())
1304 : : {
1305 : 23 : BeginTransactionBlock();
1306 : 23 : CommitTransactionCommand(); /* Completes the preceding Begin command. */
1307 : : }
1308 : :
1309 : : /*
1310 : : * Update origin state so we can restart streaming from correct position
1311 : : * in case of crash.
1312 : : */
1552 1313 : 23 : replorigin_session_origin_lsn = prepare_data->end_lsn;
1314 : 23 : replorigin_session_origin_timestamp = prepare_data->prepare_time;
1315 : :
1316 : 23 : PrepareTransactionBlock(gid);
1317 : 23 : }
1318 : :
1319 : : /*
1320 : : * Handle PREPARE message.
1321 : : */
1322 : : static void
1567 1323 : 15 : apply_handle_prepare(StringInfo s)
1324 : : {
1325 : : LogicalRepPreparedTxnData prepare_data;
1326 : :
1327 : 15 : logicalrep_read_prepare(s, &prepare_data);
1328 : :
1329 [ - + ]: 15 : if (prepare_data.prepare_lsn != remote_final_lsn)
1567 akapila@postgresql.o 1330 [ # # ]:UBC 0 : ereport(ERROR,
1331 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1332 : : errmsg_internal("incorrect prepare LSN %X/%08X in prepare message (expected %X/%08X)",
1333 : : LSN_FORMAT_ARGS(prepare_data.prepare_lsn),
1334 : : LSN_FORMAT_ARGS(remote_final_lsn))));
1335 : :
1336 : : /*
1337 : : * Unlike commit, here, we always prepare the transaction even though no
1338 : : * change has happened in this transaction or all changes are skipped. It
1339 : : * is done this way because at commit prepared time, we won't know whether
1340 : : * we have skipped preparing a transaction because of those reasons.
1341 : : *
1342 : : * XXX, We can optimize such that at commit prepared time, we first check
1343 : : * whether we have prepared the transaction or not but that doesn't seem
1344 : : * worthwhile because such cases shouldn't be common.
1345 : : */
1567 akapila@postgresql.o 1346 :CBC 15 : begin_replication_step();
1347 : :
1552 1348 : 15 : apply_handle_prepare_internal(&prepare_data);
1349 : :
1567 1350 : 15 : end_replication_step();
1351 : 15 : CommitTransactionCommand();
1352 : 14 : pgstat_report_stat(false);
1353 : :
1354 : : /*
1355 : : * It is okay not to set the local_end LSN for the prepare because we
1356 : : * always flush the prepare record. So, we can send the acknowledgment of
1357 : : * the remote_end LSN as soon as prepare is finished.
1358 : : *
1359 : : * XXX For the sake of consistency with commit, we could have set it with
1360 : : * the LSN of prepare but as of now we don't track that value similar to
1361 : : * XactLastCommitEnd, and adding it for this purpose doesn't seems worth
1362 : : * it.
1363 : : */
445 1364 : 14 : store_flush_position(prepare_data.end_lsn, InvalidXLogRecPtr);
1365 : :
1567 1366 : 14 : in_remote_transaction = false;
1367 : :
1368 : : /* Process any tables that are being synchronized in parallel. */
12 akapila@postgresql.o 1369 :GNC 14 : ProcessSyncingRelations(prepare_data.end_lsn);
1370 : :
1371 : : /*
1372 : : * Since we have already prepared the transaction, in a case where the
1373 : : * server crashes before clearing the subskiplsn, it will be left but the
1374 : : * transaction won't be resent. But that's okay because it's a rare case
1375 : : * and the subskiplsn will be cleared when finishing the next transaction.
1376 : : */
1316 akapila@postgresql.o 1377 :CBC 14 : stop_skipping_changes();
1378 : 14 : clear_subscription_skip_lsn(prepare_data.prepare_lsn);
1379 : :
1567 1380 : 14 : pgstat_report_activity(STATE_IDLE, NULL);
1523 1381 : 14 : reset_apply_error_context_info();
1567 1382 : 14 : }
1383 : :
1384 : : /*
1385 : : * Handle a COMMIT PREPARED of a previously PREPARED transaction.
1386 : : *
1387 : : * Note that we don't need to wait here if the transaction was prepared in a
1388 : : * parallel apply worker. In that case, we have already waited for the prepare
1389 : : * to finish in apply_handle_stream_prepare() which will ensure all the
1390 : : * operations in that transaction have happened in the subscriber, so no
1391 : : * concurrent transaction can cause deadlock or transaction dependency issues.
1392 : : */
1393 : : static void
1394 : 20 : apply_handle_commit_prepared(StringInfo s)
1395 : : {
1396 : : LogicalRepCommitPreparedTxnData prepare_data;
1397 : : char gid[GIDSIZE];
1398 : :
1399 : 20 : logicalrep_read_commit_prepared(s, &prepare_data);
1330 1400 : 20 : set_apply_error_context_xact(prepare_data.xid, prepare_data.commit_lsn);
1401 : :
1402 : : /* Compute GID for two_phase transactions. */
1567 1403 : 20 : TwoPhaseTransactionGid(MySubscription->oid, prepare_data.xid,
1404 : : gid, sizeof(gid));
1405 : :
1406 : : /* There is no transaction when COMMIT PREPARED is called */
1407 : 20 : begin_replication_step();
1408 : :
1409 : : /*
1410 : : * Update origin state so we can restart streaming from correct position
1411 : : * in case of crash.
1412 : : */
1413 : 20 : replorigin_session_origin_lsn = prepare_data.end_lsn;
1414 : 20 : replorigin_session_origin_timestamp = prepare_data.commit_time;
1415 : :
1416 : 20 : FinishPreparedTransaction(gid, true);
1417 : 20 : end_replication_step();
1418 : 20 : CommitTransactionCommand();
1419 : 20 : pgstat_report_stat(false);
1420 : :
1023 1421 : 20 : store_flush_position(prepare_data.end_lsn, XactLastCommitEnd);
1567 1422 : 20 : in_remote_transaction = false;
1423 : :
1424 : : /* Process any tables that are being synchronized in parallel. */
12 akapila@postgresql.o 1425 :GNC 20 : ProcessSyncingRelations(prepare_data.end_lsn);
1426 : :
1316 akapila@postgresql.o 1427 :CBC 20 : clear_subscription_skip_lsn(prepare_data.end_lsn);
1428 : :
1567 1429 : 20 : pgstat_report_activity(STATE_IDLE, NULL);
1523 1430 : 20 : reset_apply_error_context_info();
1567 1431 : 20 : }
1432 : :
1433 : : /*
1434 : : * Handle a ROLLBACK PREPARED of a previously PREPARED TRANSACTION.
1435 : : *
1436 : : * Note that we don't need to wait here if the transaction was prepared in a
1437 : : * parallel apply worker. In that case, we have already waited for the prepare
1438 : : * to finish in apply_handle_stream_prepare() which will ensure all the
1439 : : * operations in that transaction have happened in the subscriber, so no
1440 : : * concurrent transaction can cause deadlock or transaction dependency issues.
1441 : : */
1442 : : static void
1443 : 5 : apply_handle_rollback_prepared(StringInfo s)
1444 : : {
1445 : : LogicalRepRollbackPreparedTxnData rollback_data;
1446 : : char gid[GIDSIZE];
1447 : :
1448 : 5 : logicalrep_read_rollback_prepared(s, &rollback_data);
1330 1449 : 5 : set_apply_error_context_xact(rollback_data.xid, rollback_data.rollback_end_lsn);
1450 : :
1451 : : /* Compute GID for two_phase transactions. */
1567 1452 : 5 : TwoPhaseTransactionGid(MySubscription->oid, rollback_data.xid,
1453 : : gid, sizeof(gid));
1454 : :
1455 : : /*
1456 : : * It is possible that we haven't received prepare because it occurred
1457 : : * before walsender reached a consistent point or the two_phase was still
1458 : : * not enabled by that time, so in such cases, we need to skip rollback
1459 : : * prepared.
1460 : : */
1461 [ + - ]: 5 : if (LookupGXact(gid, rollback_data.prepare_end_lsn,
1462 : : rollback_data.prepare_time))
1463 : : {
1464 : : /*
1465 : : * Update origin state so we can restart streaming from correct
1466 : : * position in case of crash.
1467 : : */
1468 : 5 : replorigin_session_origin_lsn = rollback_data.rollback_end_lsn;
1469 : 5 : replorigin_session_origin_timestamp = rollback_data.rollback_time;
1470 : :
1471 : : /* There is no transaction when ABORT/ROLLBACK PREPARED is called */
1472 : 5 : begin_replication_step();
1473 : 5 : FinishPreparedTransaction(gid, false);
1474 : 5 : end_replication_step();
1475 : 5 : CommitTransactionCommand();
1476 : :
1316 1477 : 5 : clear_subscription_skip_lsn(rollback_data.rollback_end_lsn);
1478 : : }
1479 : :
1567 1480 : 5 : pgstat_report_stat(false);
1481 : :
1482 : : /*
1483 : : * It is okay not to set the local_end LSN for the rollback of prepared
1484 : : * transaction because we always flush the WAL record for it. See
1485 : : * apply_handle_prepare.
1486 : : */
445 1487 : 5 : store_flush_position(rollback_data.rollback_end_lsn, InvalidXLogRecPtr);
1567 1488 : 5 : in_remote_transaction = false;
1489 : :
1490 : : /* Process any tables that are being synchronized in parallel. */
12 akapila@postgresql.o 1491 :GNC 5 : ProcessSyncingRelations(rollback_data.rollback_end_lsn);
1492 : :
1567 akapila@postgresql.o 1493 :CBC 5 : pgstat_report_activity(STATE_IDLE, NULL);
1523 1494 : 5 : reset_apply_error_context_info();
1567 1495 : 5 : }
1496 : :
1497 : : /*
1498 : : * Handle STREAM PREPARE.
1499 : : */
1500 : : static void
1546 1501 : 11 : apply_handle_stream_prepare(StringInfo s)
1502 : : {
1503 : : LogicalRepPreparedTxnData prepare_data;
1504 : : ParallelApplyWorkerInfo *winfo;
1505 : : TransApplyAction apply_action;
1506 : :
1507 : : /* Save the message before it is consumed. */
1023 1508 : 11 : StringInfoData original_msg = *s;
1509 : :
1546 1510 [ - + ]: 11 : if (in_streamed_transaction)
1546 akapila@postgresql.o 1511 [ # # ]:UBC 0 : ereport(ERROR,
1512 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1513 : : errmsg_internal("STREAM PREPARE message without STREAM STOP")));
1514 : :
1515 : : /* Tablesync should never receive prepare. */
1546 akapila@postgresql.o 1516 [ - + ]:CBC 11 : if (am_tablesync_worker())
1546 akapila@postgresql.o 1517 [ # # ]:UBC 0 : ereport(ERROR,
1518 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1519 : : errmsg_internal("tablesync worker received a STREAM PREPARE message")));
1520 : :
1546 akapila@postgresql.o 1521 :CBC 11 : logicalrep_read_stream_prepare(s, &prepare_data);
1330 1522 : 11 : set_apply_error_context_xact(prepare_data.xid, prepare_data.prepare_lsn);
1523 : :
1023 1524 : 11 : apply_action = get_transaction_apply_action(prepare_data.xid, &winfo);
1525 : :
1526 [ + + + + : 11 : switch (apply_action)
- ]
1527 : : {
1015 1528 : 5 : case TRANS_LEADER_APPLY:
1529 : :
1530 : : /*
1531 : : * The transaction has been serialized to file, so replay all the
1532 : : * spooled operations.
1533 : : */
1023 1534 : 5 : apply_spooled_messages(MyLogicalRepWorker->stream_fileset,
1535 : : prepare_data.xid, prepare_data.prepare_lsn);
1536 : :
1537 : : /* Mark the transaction as prepared. */
1538 : 5 : apply_handle_prepare_internal(&prepare_data);
1539 : :
1540 : 5 : CommitTransactionCommand();
1541 : :
1542 : : /*
1543 : : * It is okay not to set the local_end LSN for the prepare because
1544 : : * we always flush the prepare record. See apply_handle_prepare.
1545 : : */
445 1546 : 5 : store_flush_position(prepare_data.end_lsn, InvalidXLogRecPtr);
1547 : :
1023 1548 : 5 : in_remote_transaction = false;
1549 : :
1550 : : /* Unlink the files with serialized changes and subxact info. */
1551 : 5 : stream_cleanup_files(MyLogicalRepWorker->subid, prepare_data.xid);
1552 : :
1553 [ - + ]: 5 : elog(DEBUG1, "finished processing the STREAM PREPARE command");
1554 : 5 : break;
1555 : :
1556 : 2 : case TRANS_LEADER_SEND_TO_PARALLEL:
1557 [ - + ]: 2 : Assert(winfo);
1558 : :
1559 [ + - ]: 2 : if (pa_send_data(winfo, s->len, s->data))
1560 : : {
1561 : : /* Finish processing the streaming transaction. */
1562 : 2 : pa_xact_finish(winfo, prepare_data.end_lsn);
1563 : 2 : break;
1564 : : }
1565 : :
1566 : : /*
1567 : : * Switch to serialize mode when we are not able to send the
1568 : : * change to parallel apply worker.
1569 : : */
1023 akapila@postgresql.o 1570 :UBC 0 : pa_switch_to_partial_serialize(winfo, true);
1571 : :
1572 : : /* fall through */
1023 akapila@postgresql.o 1573 :CBC 1 : case TRANS_LEADER_PARTIAL_SERIALIZE:
1574 [ - + ]: 1 : Assert(winfo);
1575 : :
1576 : 1 : stream_open_and_write_change(prepare_data.xid,
1577 : : LOGICAL_REP_MSG_STREAM_PREPARE,
1578 : : &original_msg);
1579 : :
1580 : 1 : pa_set_fileset_state(winfo->shared, FS_SERIALIZE_DONE);
1581 : :
1582 : : /* Finish processing the streaming transaction. */
1583 : 1 : pa_xact_finish(winfo, prepare_data.end_lsn);
1584 : 1 : break;
1585 : :
1586 : 3 : case TRANS_PARALLEL_APPLY:
1587 : :
1588 : : /*
1589 : : * If the parallel apply worker is applying spooled messages then
1590 : : * close the file before preparing.
1591 : : */
1592 [ + + ]: 3 : if (stream_fd)
1593 : 1 : stream_close_file();
1594 : :
1595 : 3 : begin_replication_step();
1596 : :
1597 : : /* Mark the transaction as prepared. */
1598 : 3 : apply_handle_prepare_internal(&prepare_data);
1599 : :
1600 : 3 : end_replication_step();
1601 : :
1602 : 3 : CommitTransactionCommand();
1603 : :
1604 : : /*
1605 : : * It is okay not to set the local_end LSN for the prepare because
1606 : : * we always flush the prepare record. See apply_handle_prepare.
1607 : : */
445 1608 : 3 : MyParallelShared->last_commit_end = InvalidXLogRecPtr;
1609 : :
1023 1610 : 3 : pa_set_xact_state(MyParallelShared, PARALLEL_TRANS_FINISHED);
1611 : 3 : pa_unlock_transaction(MyParallelShared->xid, AccessExclusiveLock);
1612 : :
1613 : 3 : pa_reset_subtrans();
1614 : :
1615 [ + + ]: 3 : elog(DEBUG1, "finished processing the STREAM PREPARE command");
1616 : 3 : break;
1617 : :
1023 akapila@postgresql.o 1618 :UBC 0 : default:
1015 1619 [ # # ]: 0 : elog(ERROR, "unexpected apply action: %d", (int) apply_action);
1620 : : break;
1621 : : }
1622 : :
1023 akapila@postgresql.o 1623 :CBC 11 : pgstat_report_stat(false);
1624 : :
1625 : : /* Process any tables that are being synchronized in parallel. */
12 akapila@postgresql.o 1626 :GNC 11 : ProcessSyncingRelations(prepare_data.end_lsn);
1627 : :
1628 : : /*
1629 : : * Similar to prepare case, the subskiplsn could be left in a case of
1630 : : * server crash but it's okay. See the comments in apply_handle_prepare().
1631 : : */
1316 akapila@postgresql.o 1632 :CBC 11 : stop_skipping_changes();
1633 : 11 : clear_subscription_skip_lsn(prepare_data.prepare_lsn);
1634 : :
1546 1635 : 11 : pgstat_report_activity(STATE_IDLE, NULL);
1636 : :
1523 1637 : 11 : reset_apply_error_context_info();
1546 1638 : 11 : }
1639 : :
1640 : : /*
1641 : : * Handle ORIGIN message.
1642 : : *
1643 : : * TODO, support tracking of multiple origins
1644 : : */
1645 : : static void
3204 peter_e@gmx.net 1646 : 7 : apply_handle_origin(StringInfo s)
1647 : : {
1648 : : /*
1649 : : * ORIGIN message can only come inside streaming transaction or inside
1650 : : * remote transaction and before any actual writes.
1651 : : */
1881 akapila@postgresql.o 1652 [ + + ]: 7 : if (!in_streamed_transaction &&
1653 [ + - - + ]: 10 : (!in_remote_transaction ||
1654 [ - - ]: 5 : (IsTransactionState() && !am_tablesync_worker())))
3204 peter_e@gmx.net 1655 [ # # ]:UBC 0 : ereport(ERROR,
1656 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1657 : : errmsg_internal("ORIGIN message sent out of order")));
3204 peter_e@gmx.net 1658 :CBC 7 : }
1659 : :
1660 : : /*
1661 : : * Initialize fileset (if not already done).
1662 : : *
1663 : : * Create a new file when first_segment is true, otherwise open the existing
1664 : : * file.
1665 : : */
1666 : : void
1023 akapila@postgresql.o 1667 : 363 : stream_start_internal(TransactionId xid, bool first_segment)
1668 : : {
1669 : 363 : begin_replication_step();
1670 : :
1671 : : /*
1672 : : * Initialize the worker's stream_fileset if we haven't yet. This will be
1673 : : * used for the entire duration of the worker so create it in a permanent
1674 : : * context. We create this on the very first streaming message from any
1675 : : * transaction and then use it for this and other streaming transactions.
1676 : : * Now, we could create a fileset at the start of the worker as well but
1677 : : * then we won't be sure that it will ever be used.
1678 : : */
1679 [ + + ]: 363 : if (!MyLogicalRepWorker->stream_fileset)
1680 : : {
1681 : : MemoryContext oldctx;
1682 : :
1683 : 14 : oldctx = MemoryContextSwitchTo(ApplyContext);
1684 : :
1685 : 14 : MyLogicalRepWorker->stream_fileset = palloc(sizeof(FileSet));
1686 : 14 : FileSetInit(MyLogicalRepWorker->stream_fileset);
1687 : :
1688 : 14 : MemoryContextSwitchTo(oldctx);
1689 : : }
1690 : :
1691 : : /* Open the spool file for this transaction. */
1692 : 363 : stream_open_file(MyLogicalRepWorker->subid, xid, first_segment);
1693 : :
1694 : : /* If this is not the first segment, open existing subxact file. */
1695 [ + + ]: 363 : if (!first_segment)
1696 : 331 : subxact_info_read(MyLogicalRepWorker->subid, xid);
1697 : :
1698 : 363 : end_replication_step();
1699 : 363 : }
1700 : :
1701 : : /*
1702 : : * Handle STREAM START message.
1703 : : */
1704 : : static void
1881 1705 : 857 : apply_handle_stream_start(StringInfo s)
1706 : : {
1707 : : bool first_segment;
1708 : : ParallelApplyWorkerInfo *winfo;
1709 : : TransApplyAction apply_action;
1710 : :
1711 : : /* Save the message before it is consumed. */
1023 1712 : 857 : StringInfoData original_msg = *s;
1713 : :
1599 tgl@sss.pgh.pa.us 1714 [ - + ]: 857 : if (in_streamed_transaction)
1599 tgl@sss.pgh.pa.us 1715 [ # # ]:UBC 0 : ereport(ERROR,
1716 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1717 : : errmsg_internal("duplicate STREAM START message")));
1718 : :
1719 : : /* There must not be an active streaming transaction. */
1015 akapila@postgresql.o 1720 [ - + ]:CBC 857 : Assert(!TransactionIdIsValid(stream_xid));
1721 : :
1722 : : /* notify handle methods we're processing a remote transaction */
1881 1723 : 857 : in_streamed_transaction = true;
1724 : :
1725 : : /* extract XID of the top-level transaction */
1726 : 857 : stream_xid = logicalrep_read_stream_start(s, &first_segment);
1727 : :
1599 tgl@sss.pgh.pa.us 1728 [ - + ]: 857 : if (!TransactionIdIsValid(stream_xid))
1599 tgl@sss.pgh.pa.us 1729 [ # # ]:UBC 0 : ereport(ERROR,
1730 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1731 : : errmsg_internal("invalid transaction ID in streamed replication transaction")));
1732 : :
1330 akapila@postgresql.o 1733 :CBC 857 : set_apply_error_context_xact(stream_xid, InvalidXLogRecPtr);
1734 : :
1735 : : /* Try to allocate a worker for the streaming transaction. */
1023 1736 [ + + ]: 857 : if (first_segment)
1737 : 82 : pa_allocate_worker(stream_xid);
1738 : :
1739 : 857 : apply_action = get_transaction_apply_action(stream_xid, &winfo);
1740 : :
1741 [ + + + + : 857 : switch (apply_action)
- ]
1742 : : {
1743 : 343 : case TRANS_LEADER_SERIALIZE:
1744 : :
1745 : : /*
1746 : : * Function stream_start_internal starts a transaction. This
1747 : : * transaction will be committed on the stream stop unless it is a
1748 : : * tablesync worker in which case it will be committed after
1749 : : * processing all the messages. We need this transaction for
1750 : : * handling the BufFile, used for serializing the streaming data
1751 : : * and subxact info.
1752 : : */
1753 : 343 : stream_start_internal(stream_xid, first_segment);
1754 : 343 : break;
1755 : :
1756 : 251 : case TRANS_LEADER_SEND_TO_PARALLEL:
1757 [ - + ]: 251 : Assert(winfo);
1758 : :
1759 : : /*
1760 : : * Once we start serializing the changes, the parallel apply
1761 : : * worker will wait for the leader to release the stream lock
1762 : : * until the end of the transaction. So, we don't need to release
1763 : : * the lock or increment the stream count in that case.
1764 : : */
1765 [ + + ]: 251 : if (pa_send_data(winfo, s->len, s->data))
1766 : : {
1767 : : /*
1768 : : * Unlock the shared object lock so that the parallel apply
1769 : : * worker can continue to receive changes.
1770 : : */
1771 [ + + ]: 247 : if (!first_segment)
1772 : 224 : pa_unlock_stream(winfo->shared->xid, AccessExclusiveLock);
1773 : :
1774 : : /*
1775 : : * Increment the number of streaming blocks waiting to be
1776 : : * processed by parallel apply worker.
1777 : : */
1778 : 247 : pg_atomic_add_fetch_u32(&winfo->shared->pending_stream_count, 1);
1779 : :
1780 : : /* Cache the parallel apply worker for this transaction. */
1781 : 247 : pa_set_stream_apply_worker(winfo);
1782 : 247 : break;
1783 : : }
1784 : :
1785 : : /*
1786 : : * Switch to serialize mode when we are not able to send the
1787 : : * change to parallel apply worker.
1788 : : */
1789 : 4 : pa_switch_to_partial_serialize(winfo, !first_segment);
1790 : :
1791 : : /* fall through */
1792 : 15 : case TRANS_LEADER_PARTIAL_SERIALIZE:
1793 [ - + ]: 15 : Assert(winfo);
1794 : :
1795 : : /*
1796 : : * Open the spool file unless it was already opened when switching
1797 : : * to serialize mode. The transaction started in
1798 : : * stream_start_internal will be committed on the stream stop.
1799 : : */
1800 [ + + ]: 15 : if (apply_action != TRANS_LEADER_SEND_TO_PARALLEL)
1801 : 11 : stream_start_internal(stream_xid, first_segment);
1802 : :
1803 : 15 : stream_write_change(LOGICAL_REP_MSG_STREAM_START, &original_msg);
1804 : :
1805 : : /* Cache the parallel apply worker for this transaction. */
1806 : 15 : pa_set_stream_apply_worker(winfo);
1807 : 15 : break;
1808 : :
1809 : 252 : case TRANS_PARALLEL_APPLY:
1810 [ + + ]: 252 : if (first_segment)
1811 : : {
1812 : : /* Hold the lock until the end of the transaction. */
1813 : 27 : pa_lock_transaction(MyParallelShared->xid, AccessExclusiveLock);
1814 : 27 : pa_set_xact_state(MyParallelShared, PARALLEL_TRANS_STARTED);
1815 : :
1816 : : /*
1817 : : * Signal the leader apply worker, as it may be waiting for
1818 : : * us.
1819 : : */
1820 : 27 : logicalrep_worker_wakeup(MyLogicalRepWorker->subid, InvalidOid);
1821 : : }
1822 : :
1823 : 252 : parallel_stream_nchanges = 0;
1824 : 252 : break;
1825 : :
1023 akapila@postgresql.o 1826 :UBC 0 : default:
1015 1827 [ # # ]: 0 : elog(ERROR, "unexpected apply action: %d", (int) apply_action);
1828 : : break;
1829 : : }
1830 : :
1023 akapila@postgresql.o 1831 :CBC 857 : pgstat_report_activity(STATE_RUNNING, NULL);
1881 1832 : 857 : }
1833 : :
1834 : : /*
1835 : : * Update the information about subxacts and close the file.
1836 : : *
1837 : : * This function should be called when the stream_start_internal function has
1838 : : * been called.
1839 : : */
1840 : : void
1023 1841 : 363 : stream_stop_internal(TransactionId xid)
1842 : : {
1843 : : /*
1844 : : * Serialize information about subxacts for the toplevel transaction, then
1845 : : * close the stream messages spool file.
1846 : : */
1847 : 363 : subxact_info_write(MyLogicalRepWorker->subid, xid);
1881 1848 : 363 : stream_close_file();
1849 : :
1850 : : /* We must be in a valid transaction state */
1851 [ - + ]: 363 : Assert(IsTransactionState());
1852 : :
1853 : : /* Commit the per-stream transaction */
1719 1854 : 363 : CommitTransactionCommand();
1855 : :
1856 : : /* Reset per-stream context */
1881 1857 : 363 : MemoryContextReset(LogicalStreamingContext);
1858 : 363 : }
1859 : :
1860 : : /*
1861 : : * Handle STREAM STOP message.
1862 : : */
1863 : : static void
1023 1864 : 856 : apply_handle_stream_stop(StringInfo s)
1865 : : {
1866 : : ParallelApplyWorkerInfo *winfo;
1867 : : TransApplyAction apply_action;
1868 : :
1869 [ - + ]: 856 : if (!in_streamed_transaction)
1599 tgl@sss.pgh.pa.us 1870 [ # # ]:UBC 0 : ereport(ERROR,
1871 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
1872 : : errmsg_internal("STREAM STOP message without STREAM START")));
1873 : :
1023 akapila@postgresql.o 1874 :CBC 856 : apply_action = get_transaction_apply_action(stream_xid, &winfo);
1875 : :
1876 [ + + + + : 856 : switch (apply_action)
- ]
1877 : : {
1878 : 343 : case TRANS_LEADER_SERIALIZE:
1879 : 343 : stream_stop_internal(stream_xid);
1880 : 343 : break;
1881 : :
1882 : 247 : case TRANS_LEADER_SEND_TO_PARALLEL:
1883 [ - + ]: 247 : Assert(winfo);
1884 : :
1885 : : /*
1886 : : * Lock before sending the STREAM_STOP message so that the leader
1887 : : * can hold the lock first and the parallel apply worker will wait
1888 : : * for leader to release the lock. See Locking Considerations atop
1889 : : * applyparallelworker.c.
1890 : : */
1891 : 247 : pa_lock_stream(winfo->shared->xid, AccessExclusiveLock);
1892 : :
1893 [ + - ]: 247 : if (pa_send_data(winfo, s->len, s->data))
1894 : : {
1895 : 247 : pa_set_stream_apply_worker(NULL);
1896 : 247 : break;
1897 : : }
1898 : :
1899 : : /*
1900 : : * Switch to serialize mode when we are not able to send the
1901 : : * change to parallel apply worker.
1902 : : */
1023 akapila@postgresql.o 1903 :UBC 0 : pa_switch_to_partial_serialize(winfo, true);
1904 : :
1905 : : /* fall through */
1023 akapila@postgresql.o 1906 :CBC 15 : case TRANS_LEADER_PARTIAL_SERIALIZE:
1907 : 15 : stream_write_change(LOGICAL_REP_MSG_STREAM_STOP, s);
1908 : 15 : stream_stop_internal(stream_xid);
1909 : 15 : pa_set_stream_apply_worker(NULL);
1910 : 15 : break;
1911 : :
1912 : 251 : case TRANS_PARALLEL_APPLY:
1913 [ + + ]: 251 : elog(DEBUG1, "applied %u changes in the streaming chunk",
1914 : : parallel_stream_nchanges);
1915 : :
1916 : : /*
1917 : : * By the time parallel apply worker is processing the changes in
1918 : : * the current streaming block, the leader apply worker may have
1919 : : * sent multiple streaming blocks. This can lead to parallel apply
1920 : : * worker start waiting even when there are more chunk of streams
1921 : : * in the queue. So, try to lock only if there is no message left
1922 : : * in the queue. See Locking Considerations atop
1923 : : * applyparallelworker.c.
1924 : : *
1925 : : * Note that here we have a race condition where we can start
1926 : : * waiting even when there are pending streaming chunks. This can
1927 : : * happen if the leader sends another streaming block and acquires
1928 : : * the stream lock again after the parallel apply worker checks
1929 : : * that there is no pending streaming block and before it actually
1930 : : * starts waiting on a lock. We can handle this case by not
1931 : : * allowing the leader to increment the stream block count during
1932 : : * the time parallel apply worker acquires the lock but it is not
1933 : : * clear whether that is worth the complexity.
1934 : : *
1935 : : * Now, if this missed chunk contains rollback to savepoint, then
1936 : : * there is a risk of deadlock which probably shouldn't happen
1937 : : * after restart.
1938 : : */
1939 : 251 : pa_decr_and_wait_stream_block();
1940 : 249 : break;
1941 : :
1023 akapila@postgresql.o 1942 :UBC 0 : default:
1015 1943 [ # # ]: 0 : elog(ERROR, "unexpected apply action: %d", (int) apply_action);
1944 : : break;
1945 : : }
1946 : :
1023 akapila@postgresql.o 1947 :CBC 854 : in_streamed_transaction = false;
1015 1948 : 854 : stream_xid = InvalidTransactionId;
1949 : :
1950 : : /*
1951 : : * The parallel apply worker could be in a transaction in which case we
1952 : : * need to report the state as STATE_IDLEINTRANSACTION.
1953 : : */
1023 1954 [ + + ]: 854 : if (IsTransactionOrTransactionBlock())
1955 : 249 : pgstat_report_activity(STATE_IDLEINTRANSACTION, NULL);
1956 : : else
1957 : 605 : pgstat_report_activity(STATE_IDLE, NULL);
1958 : :
1959 : 854 : reset_apply_error_context_info();
1960 : 854 : }
1961 : :
1962 : : /*
1963 : : * Helper function to handle STREAM ABORT message when the transaction was
1964 : : * serialized to file.
1965 : : */
1966 : : static void
1967 : 14 : stream_abort_internal(TransactionId xid, TransactionId subxid)
1968 : : {
1969 : : /*
1970 : : * If the two XIDs are the same, it's in fact abort of toplevel xact, so
1971 : : * just delete the files with serialized info.
1972 : : */
1881 1973 [ + + ]: 14 : if (xid == subxid)
1974 : 1 : stream_cleanup_files(MyLogicalRepWorker->subid, xid);
1975 : : else
1976 : : {
1977 : : /*
1978 : : * OK, so it's a subxact. We need to read the subxact file for the
1979 : : * toplevel transaction, determine the offset tracked for the subxact,
1980 : : * and truncate the file with changes. We also remove the subxacts
1981 : : * with higher offsets (or rather higher XIDs).
1982 : : *
1983 : : * We intentionally scan the array from the tail, because we're likely
1984 : : * aborting a change for the most recent subtransactions.
1985 : : *
1986 : : * We can't use the binary search here as subxact XIDs won't
1987 : : * necessarily arrive in sorted order, consider the case where we have
1988 : : * released the savepoint for multiple subtransactions and then
1989 : : * performed rollback to savepoint for one of the earlier
1990 : : * sub-transaction.
1991 : : */
1992 : : int64 i;
1993 : : int64 subidx;
1994 : : BufFile *fd;
1995 : 13 : bool found = false;
1996 : : char path[MAXPGPATH];
1997 : :
1998 : 13 : subidx = -1;
1601 tgl@sss.pgh.pa.us 1999 : 13 : begin_replication_step();
1881 akapila@postgresql.o 2000 : 13 : subxact_info_read(MyLogicalRepWorker->subid, xid);
2001 : :
2002 [ + + ]: 15 : for (i = subxact_data.nsubxacts; i > 0; i--)
2003 : : {
2004 [ + + ]: 11 : if (subxact_data.subxacts[i - 1].xid == subxid)
2005 : : {
2006 : 9 : subidx = (i - 1);
2007 : 9 : found = true;
2008 : 9 : break;
2009 : : }
2010 : : }
2011 : :
2012 : : /*
2013 : : * If it's an empty sub-transaction then we will not find the subxid
2014 : : * here so just cleanup the subxact info and return.
2015 : : */
2016 [ + + ]: 13 : if (!found)
2017 : : {
2018 : : /* Cleanup the subxact info */
2019 : 4 : cleanup_subxact_info();
1601 tgl@sss.pgh.pa.us 2020 : 4 : end_replication_step();
1719 akapila@postgresql.o 2021 : 4 : CommitTransactionCommand();
1881 2022 : 4 : return;
2023 : : }
2024 : :
2025 : : /* open the changes file */
1023 2026 : 9 : changes_filename(path, MyLogicalRepWorker->subid, xid);
2027 : 9 : fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path,
2028 : : O_RDWR, false);
2029 : :
2030 : : /* OK, truncate the file at the right offset */
2031 : 9 : BufFileTruncateFileSet(fd, subxact_data.subxacts[subidx].fileno,
2032 : 9 : subxact_data.subxacts[subidx].offset);
2033 : 9 : BufFileClose(fd);
2034 : :
2035 : : /* discard the subxacts added later */
2036 : 9 : subxact_data.nsubxacts = subidx;
2037 : :
2038 : : /* write the updated subxact list */
2039 : 9 : subxact_info_write(MyLogicalRepWorker->subid, xid);
2040 : :
2041 : 9 : end_replication_step();
2042 : 9 : CommitTransactionCommand();
2043 : : }
2044 : : }
2045 : :
2046 : : /*
2047 : : * Handle STREAM ABORT message.
2048 : : */
2049 : : static void
2050 : 38 : apply_handle_stream_abort(StringInfo s)
2051 : : {
2052 : : TransactionId xid;
2053 : : TransactionId subxid;
2054 : : LogicalRepStreamAbortData abort_data;
2055 : : ParallelApplyWorkerInfo *winfo;
2056 : : TransApplyAction apply_action;
2057 : :
2058 : : /* Save the message before it is consumed. */
2059 : 38 : StringInfoData original_msg = *s;
2060 : : bool toplevel_xact;
2061 : :
2062 [ - + ]: 38 : if (in_streamed_transaction)
1023 akapila@postgresql.o 2063 [ # # ]:UBC 0 : ereport(ERROR,
2064 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
2065 : : errmsg_internal("STREAM ABORT message without STREAM STOP")));
2066 : :
2067 : : /* We receive abort information only when we can apply in parallel. */
1023 akapila@postgresql.o 2068 :CBC 38 : logicalrep_read_stream_abort(s, &abort_data,
2069 : 38 : MyLogicalRepWorker->parallel_apply);
2070 : :
2071 : 38 : xid = abort_data.xid;
2072 : 38 : subxid = abort_data.subxid;
2073 : 38 : toplevel_xact = (xid == subxid);
2074 : :
2075 : 38 : set_apply_error_context_xact(subxid, abort_data.abort_lsn);
2076 : :
2077 : 38 : apply_action = get_transaction_apply_action(xid, &winfo);
2078 : :
2079 [ + + + + : 38 : switch (apply_action)
- ]
2080 : : {
1015 2081 : 14 : case TRANS_LEADER_APPLY:
2082 : :
2083 : : /*
2084 : : * We are in the leader apply worker and the transaction has been
2085 : : * serialized to file.
2086 : : */
1023 2087 : 14 : stream_abort_internal(xid, subxid);
2088 : :
2089 [ - + ]: 14 : elog(DEBUG1, "finished processing the STREAM ABORT command");
2090 : 14 : break;
2091 : :
2092 : 10 : case TRANS_LEADER_SEND_TO_PARALLEL:
2093 [ - + ]: 10 : Assert(winfo);
2094 : :
2095 : : /*
2096 : : * For the case of aborting the subtransaction, we increment the
2097 : : * number of streaming blocks and take the lock again before
2098 : : * sending the STREAM_ABORT to ensure that the parallel apply
2099 : : * worker will wait on the lock for the next set of changes after
2100 : : * processing the STREAM_ABORT message if it is not already
2101 : : * waiting for STREAM_STOP message.
2102 : : *
2103 : : * It is important to perform this locking before sending the
2104 : : * STREAM_ABORT message so that the leader can hold the lock first
2105 : : * and the parallel apply worker will wait for the leader to
2106 : : * release the lock. This is the same as what we do in
2107 : : * apply_handle_stream_stop. See Locking Considerations atop
2108 : : * applyparallelworker.c.
2109 : : */
2110 [ + + ]: 10 : if (!toplevel_xact)
2111 : : {
2112 : 9 : pa_unlock_stream(xid, AccessExclusiveLock);
2113 : 9 : pg_atomic_add_fetch_u32(&winfo->shared->pending_stream_count, 1);
2114 : 9 : pa_lock_stream(xid, AccessExclusiveLock);
2115 : : }
2116 : :
2117 [ + - ]: 10 : if (pa_send_data(winfo, s->len, s->data))
2118 : : {
2119 : : /*
2120 : : * Unlike STREAM_COMMIT and STREAM_PREPARE, we don't need to
2121 : : * wait here for the parallel apply worker to finish as that
2122 : : * is not required to maintain the commit order and won't have
2123 : : * the risk of failures due to transaction dependencies and
2124 : : * deadlocks. However, it is possible that before the parallel
2125 : : * worker finishes and we clear the worker info, the xid
2126 : : * wraparound happens on the upstream and a new transaction
2127 : : * with the same xid can appear and that can lead to duplicate
2128 : : * entries in ParallelApplyTxnHash. Yet another problem could
2129 : : * be that we may have serialized the changes in partial
2130 : : * serialize mode and the file containing xact changes may
2131 : : * already exist, and after xid wraparound trying to create
2132 : : * the file for the same xid can lead to an error. To avoid
2133 : : * these problems, we decide to wait for the aborts to finish.
2134 : : *
2135 : : * Note, it is okay to not update the flush location position
2136 : : * for aborts as in worst case that means such a transaction
2137 : : * won't be sent again after restart.
2138 : : */
2139 [ + + ]: 10 : if (toplevel_xact)
2140 : 1 : pa_xact_finish(winfo, InvalidXLogRecPtr);
2141 : :
2142 : 10 : break;
2143 : : }
2144 : :
2145 : : /*
2146 : : * Switch to serialize mode when we are not able to send the
2147 : : * change to parallel apply worker.
2148 : : */
1023 akapila@postgresql.o 2149 :UBC 0 : pa_switch_to_partial_serialize(winfo, true);
2150 : :
2151 : : /* fall through */
1023 akapila@postgresql.o 2152 :CBC 2 : case TRANS_LEADER_PARTIAL_SERIALIZE:
2153 [ - + ]: 2 : Assert(winfo);
2154 : :
2155 : : /*
2156 : : * Parallel apply worker might have applied some changes, so write
2157 : : * the STREAM_ABORT message so that it can rollback the
2158 : : * subtransaction if needed.
2159 : : */
2160 : 2 : stream_open_and_write_change(xid, LOGICAL_REP_MSG_STREAM_ABORT,
2161 : : &original_msg);
2162 : :
2163 [ + + ]: 2 : if (toplevel_xact)
2164 : : {
2165 : 1 : pa_set_fileset_state(winfo->shared, FS_SERIALIZE_DONE);
2166 : 1 : pa_xact_finish(winfo, InvalidXLogRecPtr);
2167 : : }
2168 : 2 : break;
2169 : :
2170 : 12 : case TRANS_PARALLEL_APPLY:
2171 : :
2172 : : /*
2173 : : * If the parallel apply worker is applying spooled messages then
2174 : : * close the file before aborting.
2175 : : */
2176 [ + + + + ]: 12 : if (toplevel_xact && stream_fd)
2177 : 1 : stream_close_file();
2178 : :
2179 : 12 : pa_stream_abort(&abort_data);
2180 : :
2181 : : /*
2182 : : * We need to wait after processing rollback to savepoint for the
2183 : : * next set of changes.
2184 : : *
2185 : : * We have a race condition here due to which we can start waiting
2186 : : * here when there are more chunk of streams in the queue. See
2187 : : * apply_handle_stream_stop.
2188 : : */
2189 [ + + ]: 12 : if (!toplevel_xact)
2190 : 10 : pa_decr_and_wait_stream_block();
2191 : :
2192 [ + + ]: 12 : elog(DEBUG1, "finished processing the STREAM ABORT command");
2193 : 12 : break;
2194 : :
1023 akapila@postgresql.o 2195 :UBC 0 : default:
1015 2196 [ # # ]: 0 : elog(ERROR, "unexpected apply action: %d", (int) apply_action);
2197 : : break;
2198 : : }
2199 : :
1023 akapila@postgresql.o 2200 :CBC 38 : reset_apply_error_context_info();
2201 : 38 : }
2202 : :
2203 : : /*
2204 : : * Ensure that the passed location is fileset's end.
2205 : : */
2206 : : static void
2207 : 4 : ensure_last_message(FileSet *stream_fileset, TransactionId xid, int fileno,
2208 : : off_t offset)
2209 : : {
2210 : : char path[MAXPGPATH];
2211 : : BufFile *fd;
2212 : : int last_fileno;
2213 : : off_t last_offset;
2214 : :
2215 [ - + ]: 4 : Assert(!IsTransactionState());
2216 : :
2217 : 4 : begin_replication_step();
2218 : :
2219 : 4 : changes_filename(path, MyLogicalRepWorker->subid, xid);
2220 : :
2221 : 4 : fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, false);
2222 : :
2223 : 4 : BufFileSeek(fd, 0, 0, SEEK_END);
2224 : 4 : BufFileTell(fd, &last_fileno, &last_offset);
2225 : :
2226 : 4 : BufFileClose(fd);
2227 : :
2228 : 4 : end_replication_step();
2229 : :
2230 [ + - - + ]: 4 : if (last_fileno != fileno || last_offset != offset)
1023 akapila@postgresql.o 2231 [ # # ]:UBC 0 : elog(ERROR, "unexpected message left in streaming transaction's changes file \"%s\"",
2232 : : path);
1881 akapila@postgresql.o 2233 :CBC 4 : }
2234 : :
2235 : : /*
2236 : : * Common spoolfile processing.
2237 : : */
2238 : : void
1023 2239 : 31 : apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,
2240 : : XLogRecPtr lsn)
2241 : : {
2242 : : int nchanges;
2243 : : char path[MAXPGPATH];
1881 2244 : 31 : char *buffer = NULL;
2245 : : MemoryContext oldcxt;
2246 : : ResourceOwner oldowner;
2247 : : int fileno;
2248 : : off_t offset;
2249 : :
1023 2250 [ + + ]: 31 : if (!am_parallel_apply_worker())
2251 : 27 : maybe_start_skipping_changes(lsn);
2252 : :
2253 : : /* Make sure we have an open transaction */
1601 tgl@sss.pgh.pa.us 2254 : 31 : begin_replication_step();
2255 : :
2256 : : /*
2257 : : * Allocate file handle and memory required to process all the messages in
2258 : : * TopTransactionContext to avoid them getting reset after each message is
2259 : : * processed.
2260 : : */
1881 akapila@postgresql.o 2261 : 31 : oldcxt = MemoryContextSwitchTo(TopTransactionContext);
2262 : :
2263 : : /* Open the spool file for the committed/prepared transaction */
2264 : 31 : changes_filename(path, MyLogicalRepWorker->subid, xid);
2265 [ - + ]: 31 : elog(DEBUG1, "replaying changes from file \"%s\"", path);
2266 : :
2267 : : /*
2268 : : * Make sure the file is owned by the toplevel transaction so that the
2269 : : * file will not be accidentally closed when aborting a subtransaction.
2270 : : */
1023 2271 : 31 : oldowner = CurrentResourceOwner;
2272 : 31 : CurrentResourceOwner = TopTransactionResourceOwner;
2273 : :
2274 : 31 : stream_fd = BufFileOpenFileSet(stream_fileset, path, O_RDONLY, false);
2275 : :
2276 : 31 : CurrentResourceOwner = oldowner;
2277 : :
1881 2278 : 31 : buffer = palloc(BLCKSZ);
2279 : :
2280 : 31 : MemoryContextSwitchTo(oldcxt);
2281 : :
1552 2282 : 31 : remote_final_lsn = lsn;
2283 : :
2284 : : /*
2285 : : * Make sure the handle apply_dispatch methods are aware we're in a remote
2286 : : * transaction.
2287 : : */
1881 2288 : 31 : in_remote_transaction = true;
2289 : 31 : pgstat_report_activity(STATE_RUNNING, NULL);
2290 : :
1601 tgl@sss.pgh.pa.us 2291 : 31 : end_replication_step();
2292 : :
2293 : : /*
2294 : : * Read the entries one by one and pass them through the same logic as in
2295 : : * apply_dispatch.
2296 : : */
1881 akapila@postgresql.o 2297 : 31 : nchanges = 0;
2298 : : while (true)
2299 : 88469 : {
2300 : : StringInfoData s2;
2301 : : size_t nbytes;
2302 : : int len;
2303 : :
2304 [ - + ]: 88500 : CHECK_FOR_INTERRUPTS();
2305 : :
2306 : : /* read length of the on-disk record */
1016 peter@eisentraut.org 2307 : 88500 : nbytes = BufFileReadMaybeEOF(stream_fd, &len, sizeof(len), true);
2308 : :
2309 : : /* have we reached end of the file? */
1881 akapila@postgresql.o 2310 [ + + ]: 88500 : if (nbytes == 0)
2311 : 26 : break;
2312 : :
2313 : : /* do we have a correct length? */
1599 tgl@sss.pgh.pa.us 2314 [ - + ]: 88474 : if (len <= 0)
1599 tgl@sss.pgh.pa.us 2315 [ # # ]:UBC 0 : elog(ERROR, "incorrect length %d in streaming transaction's changes file \"%s\"",
2316 : : len, path);
2317 : :
2318 : : /* make sure we have sufficiently large buffer */
1881 akapila@postgresql.o 2319 :CBC 88474 : buffer = repalloc(buffer, len);
2320 : :
2321 : : /* and finally read the data into the buffer */
1016 peter@eisentraut.org 2322 : 88474 : BufFileReadExact(stream_fd, buffer, len);
2323 : :
1023 akapila@postgresql.o 2324 : 88474 : BufFileTell(stream_fd, &fileno, &offset);
2325 : :
2326 : : /* init a stringinfo using the buffer and call apply_dispatch */
721 drowley@postgresql.o 2327 : 88474 : initReadOnlyStringInfo(&s2, buffer, len);
2328 : :
2329 : : /* Ensure we are reading the data into our memory context. */
1881 akapila@postgresql.o 2330 : 88474 : oldcxt = MemoryContextSwitchTo(ApplyMessageContext);
2331 : :
2332 : 88474 : apply_dispatch(&s2);
2333 : :
2334 : 88473 : MemoryContextReset(ApplyMessageContext);
2335 : :
2336 : 88473 : MemoryContextSwitchTo(oldcxt);
2337 : :
2338 : 88473 : nchanges++;
2339 : :
2340 : : /*
2341 : : * It is possible the file has been closed because we have processed
2342 : : * the transaction end message like stream_commit in which case that
2343 : : * must be the last message.
2344 : : */
1023 2345 [ + + ]: 88473 : if (!stream_fd)
2346 : : {
2347 : 4 : ensure_last_message(stream_fileset, xid, fileno, offset);
2348 : 4 : break;
2349 : : }
2350 : :
1881 2351 [ + + ]: 88469 : if (nchanges % 1000 == 0)
1599 tgl@sss.pgh.pa.us 2352 [ - + ]: 83 : elog(DEBUG1, "replayed %d changes from file \"%s\"",
2353 : : nchanges, path);
2354 : : }
2355 : :
1023 akapila@postgresql.o 2356 [ + + ]: 30 : if (stream_fd)
2357 : 26 : stream_close_file();
2358 : :
1881 2359 [ - + ]: 30 : elog(DEBUG1, "replayed %d (all) changes from file \"%s\"",
2360 : : nchanges, path);
2361 : :
1552 2362 : 30 : return;
2363 : : }
2364 : :
2365 : : /*
2366 : : * Handle STREAM COMMIT message.
2367 : : */
2368 : : static void
2369 : 61 : apply_handle_stream_commit(StringInfo s)
2370 : : {
2371 : : TransactionId xid;
2372 : : LogicalRepCommitData commit_data;
2373 : : ParallelApplyWorkerInfo *winfo;
2374 : : TransApplyAction apply_action;
2375 : :
2376 : : /* Save the message before it is consumed. */
1023 2377 : 61 : StringInfoData original_msg = *s;
2378 : :
1552 2379 [ - + ]: 61 : if (in_streamed_transaction)
1552 akapila@postgresql.o 2380 [ # # ]:UBC 0 : ereport(ERROR,
2381 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
2382 : : errmsg_internal("STREAM COMMIT message without STREAM STOP")));
2383 : :
1552 akapila@postgresql.o 2384 :CBC 61 : xid = logicalrep_read_stream_commit(s, &commit_data);
1330 2385 : 61 : set_apply_error_context_xact(xid, commit_data.commit_lsn);
2386 : :
1023 2387 : 61 : apply_action = get_transaction_apply_action(xid, &winfo);
2388 : :
2389 [ + + + + : 61 : switch (apply_action)
- ]
2390 : : {
1015 2391 : 22 : case TRANS_LEADER_APPLY:
2392 : :
2393 : : /*
2394 : : * The transaction has been serialized to file, so replay all the
2395 : : * spooled operations.
2396 : : */
1023 2397 : 22 : apply_spooled_messages(MyLogicalRepWorker->stream_fileset, xid,
2398 : : commit_data.commit_lsn);
2399 : :
2400 : 21 : apply_handle_commit_internal(&commit_data);
2401 : :
2402 : : /* Unlink the files with serialized changes and subxact info. */
2403 : 21 : stream_cleanup_files(MyLogicalRepWorker->subid, xid);
2404 : :
2405 [ - + ]: 21 : elog(DEBUG1, "finished processing the STREAM COMMIT command");
2406 : 21 : break;
2407 : :
2408 : 18 : case TRANS_LEADER_SEND_TO_PARALLEL:
2409 [ - + ]: 18 : Assert(winfo);
2410 : :
2411 [ + - ]: 18 : if (pa_send_data(winfo, s->len, s->data))
2412 : : {
2413 : : /* Finish processing the streaming transaction. */
2414 : 18 : pa_xact_finish(winfo, commit_data.end_lsn);
2415 : 17 : break;
2416 : : }
2417 : :
2418 : : /*
2419 : : * Switch to serialize mode when we are not able to send the
2420 : : * change to parallel apply worker.
2421 : : */
1023 akapila@postgresql.o 2422 :UBC 0 : pa_switch_to_partial_serialize(winfo, true);
2423 : :
2424 : : /* fall through */
1023 akapila@postgresql.o 2425 :CBC 2 : case TRANS_LEADER_PARTIAL_SERIALIZE:
2426 [ - + ]: 2 : Assert(winfo);
2427 : :
2428 : 2 : stream_open_and_write_change(xid, LOGICAL_REP_MSG_STREAM_COMMIT,
2429 : : &original_msg);
2430 : :
2431 : 2 : pa_set_fileset_state(winfo->shared, FS_SERIALIZE_DONE);
2432 : :
2433 : : /* Finish processing the streaming transaction. */
2434 : 2 : pa_xact_finish(winfo, commit_data.end_lsn);
2435 : 2 : break;
2436 : :
2437 : 19 : case TRANS_PARALLEL_APPLY:
2438 : :
2439 : : /*
2440 : : * If the parallel apply worker is applying spooled messages then
2441 : : * close the file before committing.
2442 : : */
2443 [ + + ]: 19 : if (stream_fd)
2444 : 2 : stream_close_file();
2445 : :
2446 : 19 : apply_handle_commit_internal(&commit_data);
2447 : :
2448 : 19 : MyParallelShared->last_commit_end = XactLastCommitEnd;
2449 : :
2450 : : /*
2451 : : * It is important to set the transaction state as finished before
2452 : : * releasing the lock. See pa_wait_for_xact_finish.
2453 : : */
2454 : 19 : pa_set_xact_state(MyParallelShared, PARALLEL_TRANS_FINISHED);
2455 : 19 : pa_unlock_transaction(xid, AccessExclusiveLock);
2456 : :
2457 : 19 : pa_reset_subtrans();
2458 : :
2459 [ + + ]: 19 : elog(DEBUG1, "finished processing the STREAM COMMIT command");
2460 : 19 : break;
2461 : :
1023 akapila@postgresql.o 2462 :UBC 0 : default:
1015 2463 [ # # ]: 0 : elog(ERROR, "unexpected apply action: %d", (int) apply_action);
2464 : : break;
2465 : : }
2466 : :
2467 : : /* Process any tables that are being synchronized in parallel. */
12 akapila@postgresql.o 2468 :GNC 59 : ProcessSyncingRelations(commit_data.end_lsn);
2469 : :
1881 akapila@postgresql.o 2470 :CBC 59 : pgstat_report_activity(STATE_IDLE, NULL);
2471 : :
1523 2472 : 59 : reset_apply_error_context_info();
1881 2473 : 59 : }
2474 : :
2475 : : /*
2476 : : * Helper function for apply_handle_commit and apply_handle_stream_commit.
2477 : : */
2478 : : static void
1551 2479 : 485 : apply_handle_commit_internal(LogicalRepCommitData *commit_data)
2480 : : {
1316 2481 [ + + ]: 485 : if (is_skipping_changes())
2482 : : {
2483 : 2 : stop_skipping_changes();
2484 : :
2485 : : /*
2486 : : * Start a new transaction to clear the subskiplsn, if not started
2487 : : * yet.
2488 : : */
2489 [ + + ]: 2 : if (!IsTransactionState())
2490 : 1 : StartTransactionCommand();
2491 : : }
2492 : :
1719 2493 [ + - ]: 485 : if (IsTransactionState())
2494 : : {
2495 : : /*
2496 : : * The transaction is either non-empty or skipped, so we clear the
2497 : : * subskiplsn.
2498 : : */
1316 2499 : 485 : clear_subscription_skip_lsn(commit_data->commit_lsn);
2500 : :
2501 : : /*
2502 : : * Update origin state so we can restart streaming from correct
2503 : : * position in case of crash.
2504 : : */
1796 2505 : 485 : replorigin_session_origin_lsn = commit_data->end_lsn;
2506 : 485 : replorigin_session_origin_timestamp = commit_data->committime;
2507 : :
2508 : 485 : CommitTransactionCommand();
2509 : :
1023 2510 [ + + ]: 485 : if (IsTransactionBlock())
2511 : : {
2512 : 4 : EndTransactionBlock(false);
2513 : 4 : CommitTransactionCommand();
2514 : : }
2515 : :
1796 2516 : 485 : pgstat_report_stat(false);
2517 : :
1023 2518 : 485 : store_flush_position(commit_data->end_lsn, XactLastCommitEnd);
2519 : : }
2520 : : else
2521 : : {
2522 : : /* Process any invalidation messages that might have accumulated. */
1796 akapila@postgresql.o 2523 :UBC 0 : AcceptInvalidationMessages();
2524 : 0 : maybe_reread_subscription();
2525 : : }
2526 : :
1796 akapila@postgresql.o 2527 :CBC 485 : in_remote_transaction = false;
2528 : 485 : }
2529 : :
2530 : : /*
2531 : : * Handle RELATION message.
2532 : : *
2533 : : * Note we don't do validation against local schema here. The validation
2534 : : * against local schema is postponed until first change for given relation
2535 : : * comes as we only care about it when applying changes for it anyway and we
2536 : : * do less locking this way.
2537 : : */
2538 : : static void
3204 peter_e@gmx.net 2539 : 473 : apply_handle_relation(StringInfo s)
2540 : : {
2541 : : LogicalRepRelation *rel;
2542 : :
1797 akapila@postgresql.o 2543 [ + + ]: 473 : if (handle_streamed_transaction(LOGICAL_REP_MSG_RELATION, s))
1881 2544 : 35 : return;
2545 : :
3204 peter_e@gmx.net 2546 : 438 : rel = logicalrep_read_rel(s);
2547 : 438 : logicalrep_relmap_update(rel);
2548 : :
2549 : : /* Also reset all entries in the partition map that refer to remoterel. */
1230 akapila@postgresql.o 2550 : 438 : logicalrep_partmap_reset_relmap(rel);
2551 : : }
2552 : :
2553 : : /*
2554 : : * Handle TYPE message.
2555 : : *
2556 : : * This implementation pays no attention to TYPE messages; we expect the user
2557 : : * to have set things up so that the incoming data is acceptable to the input
2558 : : * functions for the locally subscribed tables. Hence, we just read and
2559 : : * discard the message.
2560 : : */
2561 : : static void
3204 peter_e@gmx.net 2562 : 18 : apply_handle_type(StringInfo s)
2563 : : {
2564 : : LogicalRepTyp typ;
2565 : :
1797 akapila@postgresql.o 2566 [ - + ]: 18 : if (handle_streamed_transaction(LOGICAL_REP_MSG_TYPE, s))
1881 akapila@postgresql.o 2567 :UBC 0 : return;
2568 : :
3204 peter_e@gmx.net 2569 :CBC 18 : logicalrep_read_typ(s, &typ);
2570 : : }
2571 : :
2572 : : /*
2573 : : * Check that we (the subscription owner) have sufficient privileges on the
2574 : : * target relation to perform the given operation.
2575 : : */
2576 : : static void
1390 jdavis@postgresql.or 2577 : 220378 : TargetPrivilegesCheck(Relation rel, AclMode mode)
2578 : : {
2579 : : Oid relid;
2580 : : AclResult aclresult;
2581 : :
2582 : 220378 : relid = RelationGetRelid(rel);
2583 : 220378 : aclresult = pg_class_aclcheck(relid, GetUserId(), mode);
2584 [ + + ]: 220378 : if (aclresult != ACLCHECK_OK)
2585 : 9 : aclcheck_error(aclresult,
2586 : 9 : get_relkind_objtype(rel->rd_rel->relkind),
2587 : 9 : get_rel_name(relid));
2588 : :
2589 : : /*
2590 : : * We lack the infrastructure to honor RLS policies. It might be possible
2591 : : * to add such infrastructure here, but tablesync workers lack it, too, so
2592 : : * we don't bother. RLS does not ordinarily apply to TRUNCATE commands,
2593 : : * but it seems dangerous to replicate a TRUNCATE and then refuse to
2594 : : * replicate subsequent INSERTs, so we forbid all commands the same.
2595 : : */
2596 [ + + ]: 220369 : if (check_enable_rls(relid, InvalidOid, false) == RLS_ENABLED)
2597 [ + - ]: 3 : ereport(ERROR,
2598 : : (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
2599 : : errmsg("user \"%s\" cannot replicate into relation with row-level security enabled: \"%s\"",
2600 : : GetUserNameFromId(GetUserId(), true),
2601 : : RelationGetRelationName(rel))));
2602 : 220366 : }
2603 : :
2604 : : /*
2605 : : * Handle INSERT message.
2606 : : */
2607 : :
2608 : : static void
3204 peter_e@gmx.net 2609 : 185901 : apply_handle_insert(StringInfo s)
2610 : : {
2611 : : LogicalRepRelMapEntry *rel;
2612 : : LogicalRepTupleData newtup;
2613 : : LogicalRepRelId relid;
2614 : : UserContext ucxt;
2615 : : ApplyExecutionData *edata;
2616 : : EState *estate;
2617 : : TupleTableSlot *remoteslot;
2618 : : MemoryContext oldctx;
2619 : : bool run_as_owner;
2620 : :
2621 : : /*
2622 : : * Quick return if we are skipping data modification changes or handling
2623 : : * streamed transactions.
2624 : : */
1316 akapila@postgresql.o 2625 [ + + + + ]: 361801 : if (is_skipping_changes() ||
2626 : 175900 : handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))
1881 2627 : 110056 : return;
2628 : :
1601 tgl@sss.pgh.pa.us 2629 : 75894 : begin_replication_step();
2630 : :
3204 peter_e@gmx.net 2631 : 75892 : relid = logicalrep_read_insert(s, &newtup);
2632 : 75892 : rel = logicalrep_rel_open(relid, RowExclusiveLock);
3141 2633 [ + + ]: 75884 : if (!should_apply_changes_for_rel(rel))
2634 : : {
2635 : : /*
2636 : : * The relation can't become interesting in the middle of the
2637 : : * transaction so it's safe to unlock it.
2638 : : */
2639 : 49 : logicalrep_rel_close(rel, RowExclusiveLock);
1601 tgl@sss.pgh.pa.us 2640 : 49 : end_replication_step();
3141 peter_e@gmx.net 2641 : 49 : return;
2642 : : }
2643 : :
2644 : : /*
2645 : : * Make sure that any user-supplied code runs as the table owner, unless
2646 : : * the user has opted out of that behavior.
2647 : : */
938 rhaas@postgresql.org 2648 : 75835 : run_as_owner = MySubscription->runasowner;
2649 [ + + ]: 75835 : if (!run_as_owner)
2650 : 75826 : SwitchToUntrustedUser(rel->localrel->rd_rel->relowner, &ucxt);
2651 : :
2652 : : /* Set relation for error callback */
1523 akapila@postgresql.o 2653 : 75835 : apply_error_callback_arg.rel = rel;
2654 : :
2655 : : /* Initialize the executor state. */
1620 tgl@sss.pgh.pa.us 2656 : 75835 : edata = create_edata_for_relation(rel);
2657 : 75835 : estate = edata->estate;
2811 andres@anarazel.de 2658 : 75835 : remoteslot = ExecInitExtraTupleSlot(estate,
2539 2659 : 75835 : RelationGetDescr(rel->localrel),
2660 : : &TTSOpsVirtual);
2661 : :
2662 : : /* Process and store remote tuple in the slot */
3204 peter_e@gmx.net 2663 [ - + ]: 75835 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
1928 tgl@sss.pgh.pa.us 2664 : 75835 : slot_store_data(remoteslot, rel, &newtup);
3204 peter_e@gmx.net 2665 : 75835 : slot_fill_defaults(rel, estate, remoteslot);
2666 : 75835 : MemoryContextSwitchTo(oldctx);
2667 : :
2668 : : /* For a partitioned table, insert the tuple into a partition. */
2031 peter@eisentraut.org 2669 [ + + ]: 75835 : if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
1620 tgl@sss.pgh.pa.us 2670 : 61 : apply_handle_tuple_routing(edata,
2671 : : remoteslot, NULL, CMD_INSERT);
2672 : : else
2673 : : {
251 2674 : 75774 : ResultRelInfo *relinfo = edata->targetRelInfo;
2675 : :
203 akapila@postgresql.o 2676 : 75774 : ExecOpenIndices(relinfo, false);
251 tgl@sss.pgh.pa.us 2677 : 75774 : apply_handle_insert_internal(edata, relinfo, remoteslot);
2678 : 75758 : ExecCloseIndices(relinfo);
2679 : : }
2680 : :
1620 2681 : 75802 : finish_edata(edata);
2682 : :
2683 : : /* Reset relation for error callback */
1523 akapila@postgresql.o 2684 : 75802 : apply_error_callback_arg.rel = NULL;
2685 : :
938 rhaas@postgresql.org 2686 [ + + ]: 75802 : if (!run_as_owner)
2687 : 75797 : RestoreUserContext(&ucxt);
2688 : :
3204 peter_e@gmx.net 2689 : 75802 : logicalrep_rel_close(rel, NoLock);
2690 : :
1601 tgl@sss.pgh.pa.us 2691 : 75802 : end_replication_step();
2692 : : }
2693 : :
2694 : : /*
2695 : : * Workhorse for apply_handle_insert()
2696 : : * relinfo is for the relation we're actually inserting into
2697 : : * (could be a child partition of edata->targetRelInfo)
2698 : : */
2699 : : static void
1620 2700 : 75836 : apply_handle_insert_internal(ApplyExecutionData *edata,
2701 : : ResultRelInfo *relinfo,
2702 : : TupleTableSlot *remoteslot)
2703 : : {
2704 : 75836 : EState *estate = edata->estate;
2705 : :
2706 : : /* Caller should have opened indexes already. */
251 2707 [ + + + + : 75836 : Assert(relinfo->ri_IndexRelationDescs != NULL ||
- + ]
2708 : : !relinfo->ri_RelationDesc->rd_rel->relhasindex ||
2709 : : RelationGetIndexList(relinfo->ri_RelationDesc) == NIL);
2710 : :
2711 : : /* Caller will not have done this bit. */
2712 [ - + ]: 75836 : Assert(relinfo->ri_onConflictArbiterIndexes == NIL);
434 akapila@postgresql.o 2713 : 75836 : InitConflictIndexes(relinfo);
2714 : :
2715 : : /* Do the insert. */
1390 jdavis@postgresql.or 2716 : 75836 : TargetPrivilegesCheck(relinfo->ri_RelationDesc, ACL_INSERT);
1840 heikki.linnakangas@i 2717 : 75828 : ExecSimpleRelationInsert(relinfo, estate, remoteslot);
2044 peter@eisentraut.org 2718 : 75803 : }
2719 : :
2720 : : /*
2721 : : * Check if the logical replication relation is updatable and throw
2722 : : * appropriate error if it isn't.
2723 : : */
2724 : : static void
3204 peter_e@gmx.net 2725 : 72303 : check_relation_updatable(LogicalRepRelMapEntry *rel)
2726 : : {
2727 : : /*
2728 : : * For partitioned tables, we only need to care if the target partition is
2729 : : * updatable (aka has PK or RI defined for it).
2730 : : */
1225 akapila@postgresql.o 2731 [ + + ]: 72303 : if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
2732 : 30 : return;
2733 : :
2734 : : /* Updatable, no error. */
3204 peter_e@gmx.net 2735 [ + - ]: 72273 : if (rel->updatable)
2736 : 72273 : return;
2737 : :
2738 : : /*
2739 : : * We are in error mode so it's fine this is somewhat slow. It's better to
2740 : : * give user correct error.
2741 : : */
3204 peter_e@gmx.net 2742 [ # # ]:UBC 0 : if (OidIsValid(GetRelationIdentityOrPK(rel->localrel)))
2743 : : {
2744 [ # # ]: 0 : ereport(ERROR,
2745 : : (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
2746 : : errmsg("publisher did not send replica identity column "
2747 : : "expected by the logical replication target relation \"%s.%s\"",
2748 : : rel->remoterel.nspname, rel->remoterel.relname)));
2749 : : }
2750 : :
2751 [ # # ]: 0 : ereport(ERROR,
2752 : : (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
2753 : : errmsg("logical replication target relation \"%s.%s\" has "
2754 : : "neither REPLICA IDENTITY index nor PRIMARY "
2755 : : "KEY and published relation does not have "
2756 : : "REPLICA IDENTITY FULL",
2757 : : rel->remoterel.nspname, rel->remoterel.relname)));
2758 : : }
2759 : :
2760 : : /*
2761 : : * Handle UPDATE message.
2762 : : *
2763 : : * TODO: FDW support
2764 : : */
2765 : : static void
3204 peter_e@gmx.net 2766 :CBC 66177 : apply_handle_update(StringInfo s)
2767 : : {
2768 : : LogicalRepRelMapEntry *rel;
2769 : : LogicalRepRelId relid;
2770 : : UserContext ucxt;
2771 : : ApplyExecutionData *edata;
2772 : : EState *estate;
2773 : : LogicalRepTupleData oldtup;
2774 : : LogicalRepTupleData newtup;
2775 : : bool has_oldtup;
2776 : : TupleTableSlot *remoteslot;
2777 : : RTEPermissionInfo *target_perminfo;
2778 : : MemoryContext oldctx;
2779 : : bool run_as_owner;
2780 : :
2781 : : /*
2782 : : * Quick return if we are skipping data modification changes or handling
2783 : : * streamed transactions.
2784 : : */
1316 akapila@postgresql.o 2785 [ + + + + ]: 132351 : if (is_skipping_changes() ||
2786 : 66174 : handle_streamed_transaction(LOGICAL_REP_MSG_UPDATE, s))
1881 2787 : 34223 : return;
2788 : :
1601 tgl@sss.pgh.pa.us 2789 : 31954 : begin_replication_step();
2790 : :
3204 peter_e@gmx.net 2791 : 31953 : relid = logicalrep_read_update(s, &has_oldtup, &oldtup,
2792 : : &newtup);
2793 : 31953 : rel = logicalrep_rel_open(relid, RowExclusiveLock);
3141 2794 [ - + ]: 31953 : if (!should_apply_changes_for_rel(rel))
2795 : : {
2796 : : /*
2797 : : * The relation can't become interesting in the middle of the
2798 : : * transaction so it's safe to unlock it.
2799 : : */
3141 peter_e@gmx.net 2800 :UBC 0 : logicalrep_rel_close(rel, RowExclusiveLock);
1601 tgl@sss.pgh.pa.us 2801 : 0 : end_replication_step();
3141 peter_e@gmx.net 2802 : 0 : return;
2803 : : }
2804 : :
2805 : : /* Set relation for error callback */
1523 akapila@postgresql.o 2806 :CBC 31953 : apply_error_callback_arg.rel = rel;
2807 : :
2808 : : /* Check if we can do the update. */
3204 peter_e@gmx.net 2809 : 31953 : check_relation_updatable(rel);
2810 : :
2811 : : /*
2812 : : * Make sure that any user-supplied code runs as the table owner, unless
2813 : : * the user has opted out of that behavior.
2814 : : */
938 rhaas@postgresql.org 2815 : 31953 : run_as_owner = MySubscription->runasowner;
2816 [ + + ]: 31953 : if (!run_as_owner)
2817 : 31950 : SwitchToUntrustedUser(rel->localrel->rd_rel->relowner, &ucxt);
2818 : :
2819 : : /* Initialize the executor state. */
1620 tgl@sss.pgh.pa.us 2820 : 31952 : edata = create_edata_for_relation(rel);
2821 : 31952 : estate = edata->estate;
2811 andres@anarazel.de 2822 : 31952 : remoteslot = ExecInitExtraTupleSlot(estate,
2539 2823 : 31952 : RelationGetDescr(rel->localrel),
2824 : : &TTSOpsVirtual);
2825 : :
2826 : : /*
2827 : : * Populate updatedCols so that per-column triggers can fire, and so
2828 : : * executor can correctly pass down indexUnchanged hint. This could
2829 : : * include more columns than were actually changed on the publisher
2830 : : * because the logical replication protocol doesn't contain that
2831 : : * information. But it would for example exclude columns that only exist
2832 : : * on the subscriber, since we are not touching those.
2833 : : */
1057 alvherre@alvh.no-ip. 2834 : 31952 : target_perminfo = list_nth(estate->es_rteperminfos, 0);
2122 peter@eisentraut.org 2835 [ + + ]: 159374 : for (int i = 0; i < remoteslot->tts_tupleDescriptor->natts; i++)
2836 : : {
6 drowley@postgresql.o 2837 :GNC 127422 : CompactAttribute *att = TupleDescCompactAttr(remoteslot->tts_tupleDescriptor, i);
1926 tgl@sss.pgh.pa.us 2838 :CBC 127422 : int remoteattnum = rel->attrmap->attnums[i];
2839 : :
2840 [ + + + + ]: 127422 : if (!att->attisdropped && remoteattnum >= 0)
2841 : : {
2842 [ - + ]: 68905 : Assert(remoteattnum < newtup.ncols);
2843 [ + + ]: 68905 : if (newtup.colstatus[remoteattnum] != LOGICALREP_COLUMN_UNCHANGED)
1057 alvherre@alvh.no-ip. 2844 : 68902 : target_perminfo->updatedCols =
2845 : 68902 : bms_add_member(target_perminfo->updatedCols,
2846 : : i + 1 - FirstLowInvalidHeapAttributeNumber);
2847 : : }
2848 : : }
2849 : :
2850 : : /* Build the search tuple. */
3204 peter_e@gmx.net 2851 [ - + ]: 31952 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
1928 tgl@sss.pgh.pa.us 2852 : 31952 : slot_store_data(remoteslot, rel,
2853 [ + + ]: 31952 : has_oldtup ? &oldtup : &newtup);
3204 peter_e@gmx.net 2854 : 31952 : MemoryContextSwitchTo(oldctx);
2855 : :
2856 : : /* For a partitioned table, apply update to correct partition. */
2031 peter@eisentraut.org 2857 [ + + ]: 31952 : if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
1620 tgl@sss.pgh.pa.us 2858 : 13 : apply_handle_tuple_routing(edata,
2859 : : remoteslot, &newtup, CMD_UPDATE);
2860 : : else
2861 : 31939 : apply_handle_update_internal(edata, edata->targetRelInfo,
2862 : : remoteslot, &newtup, rel->localindexoid);
2863 : :
2864 : 31946 : finish_edata(edata);
2865 : :
2866 : : /* Reset relation for error callback */
1523 akapila@postgresql.o 2867 : 31946 : apply_error_callback_arg.rel = NULL;
2868 : :
938 rhaas@postgresql.org 2869 [ + + ]: 31946 : if (!run_as_owner)
2870 : 31944 : RestoreUserContext(&ucxt);
2871 : :
2044 peter@eisentraut.org 2872 : 31946 : logicalrep_rel_close(rel, NoLock);
2873 : :
1601 tgl@sss.pgh.pa.us 2874 : 31946 : end_replication_step();
2875 : : }
2876 : :
2877 : : /*
2878 : : * Workhorse for apply_handle_update()
2879 : : * relinfo is for the relation we're actually updating in
2880 : : * (could be a child partition of edata->targetRelInfo)
2881 : : */
2882 : : static void
1620 2883 : 31939 : apply_handle_update_internal(ApplyExecutionData *edata,
2884 : : ResultRelInfo *relinfo,
2885 : : TupleTableSlot *remoteslot,
2886 : : LogicalRepTupleData *newtup,
2887 : : Oid localindexoid)
2888 : : {
2889 : 31939 : EState *estate = edata->estate;
2890 : 31939 : LogicalRepRelMapEntry *relmapentry = edata->targetRel;
2044 peter@eisentraut.org 2891 : 31939 : Relation localrel = relinfo->ri_RelationDesc;
2892 : : EPQState epqstate;
218 akapila@postgresql.o 2893 : 31939 : TupleTableSlot *localslot = NULL;
2894 : 31939 : ConflictTupleInfo conflicttuple = {0};
2895 : : bool found;
2896 : : MemoryContext oldctx;
2897 : :
893 tgl@sss.pgh.pa.us 2898 : 31939 : EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1, NIL);
203 akapila@postgresql.o 2899 : 31939 : ExecOpenIndices(relinfo, false);
2900 : :
826 msawada@postgresql.o 2901 : 31939 : found = FindReplTupleInLocalRel(edata, localrel,
2902 : : &relmapentry->remoterel,
2903 : : localindexoid,
2904 : : remoteslot, &localslot);
2905 : :
2906 : : /*
2907 : : * Tuple found.
2908 : : *
2909 : : * Note this will fail if there are other conflicting unique indexes.
2910 : : */
3204 peter_e@gmx.net 2911 [ + + ]: 31935 : if (found)
2912 : : {
2913 : : /*
2914 : : * Report the conflict if the tuple was modified by a different
2915 : : * origin.
2916 : : */
218 akapila@postgresql.o 2917 [ + + ]: 31913 : if (GetTupleTransactionInfo(localslot, &conflicttuple.xmin,
2918 : 2 : &conflicttuple.origin, &conflicttuple.ts) &&
2919 [ + - ]: 2 : conflicttuple.origin != replorigin_session_origin)
2920 : : {
2921 : : TupleTableSlot *newslot;
2922 : :
2923 : : /* Store the new tuple for conflict reporting */
434 2924 : 2 : newslot = table_slot_create(localrel, &estate->es_tupleTable);
2925 : 2 : slot_store_data(newslot, relmapentry, newtup);
2926 : :
218 2927 : 2 : conflicttuple.slot = localslot;
2928 : :
425 2929 : 2 : ReportApplyConflict(estate, relinfo, LOG, CT_UPDATE_ORIGIN_DIFFERS,
2930 : : remoteslot, newslot,
218 2931 : 2 : list_make1(&conflicttuple));
2932 : : }
2933 : :
2934 : : /* Process and store remote tuple in the slot */
3204 peter_e@gmx.net 2935 [ + - ]: 31913 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
1928 tgl@sss.pgh.pa.us 2936 : 31913 : slot_modify_data(remoteslot, localslot, relmapentry, newtup);
3204 peter_e@gmx.net 2937 : 31913 : MemoryContextSwitchTo(oldctx);
2938 : :
2939 : 31913 : EvalPlanQualSetSlot(&epqstate, remoteslot);
2940 : :
434 akapila@postgresql.o 2941 : 31913 : InitConflictIndexes(relinfo);
2942 : :
2943 : : /* Do the actual update. */
1390 jdavis@postgresql.or 2944 : 31913 : TargetPrivilegesCheck(relinfo->ri_RelationDesc, ACL_UPDATE);
1840 heikki.linnakangas@i 2945 : 31913 : ExecSimpleRelationUpdate(relinfo, estate, &epqstate, localslot,
2946 : : remoteslot);
2947 : : }
2948 : : else
2949 : : {
2950 : : ConflictType type;
434 akapila@postgresql.o 2951 : 22 : TupleTableSlot *newslot = localslot;
2952 : :
2953 : : /*
2954 : : * Detecting whether the tuple was recently deleted or never existed
2955 : : * is crucial to avoid misleading the user during conflict handling.
2956 : : */
85 akapila@postgresql.o 2957 [ + + ]:GNC 22 : if (FindDeletedTupleInLocalRel(localrel, localindexoid, remoteslot,
2958 : : &conflicttuple.xmin,
2959 : : &conflicttuple.origin,
2960 : 2 : &conflicttuple.ts) &&
2961 [ + - ]: 2 : conflicttuple.origin != replorigin_session_origin)
2962 : 2 : type = CT_UPDATE_DELETED;
2963 : : else
2964 : 20 : type = CT_UPDATE_MISSING;
2965 : :
2966 : : /* Store the new tuple for conflict reporting */
434 akapila@postgresql.o 2967 :CBC 22 : slot_store_data(newslot, relmapentry, newtup);
2968 : :
2969 : : /*
2970 : : * The tuple to be updated could not be found or was deleted. Do
2971 : : * nothing except for emitting a log message.
2972 : : */
85 akapila@postgresql.o 2973 :GNC 22 : ReportApplyConflict(estate, relinfo, LOG, type, remoteslot, newslot,
2974 : 22 : list_make1(&conflicttuple));
2975 : : }
2976 : :
2977 : : /* Cleanup. */
2044 peter@eisentraut.org 2978 :CBC 31933 : ExecCloseIndices(relinfo);
3204 peter_e@gmx.net 2979 : 31933 : EvalPlanQualEnd(&epqstate);
2980 : 31933 : }
2981 : :
2982 : : /*
2983 : : * Handle DELETE message.
2984 : : *
2985 : : * TODO: FDW support
2986 : : */
2987 : : static void
2988 : 81935 : apply_handle_delete(StringInfo s)
2989 : : {
2990 : : LogicalRepRelMapEntry *rel;
2991 : : LogicalRepTupleData oldtup;
2992 : : LogicalRepRelId relid;
2993 : : UserContext ucxt;
2994 : : ApplyExecutionData *edata;
2995 : : EState *estate;
2996 : : TupleTableSlot *remoteslot;
2997 : : MemoryContext oldctx;
2998 : : bool run_as_owner;
2999 : :
3000 : : /*
3001 : : * Quick return if we are skipping data modification changes or handling
3002 : : * streamed transactions.
3003 : : */
1316 akapila@postgresql.o 3004 [ + - + + ]: 163870 : if (is_skipping_changes() ||
3005 : 81935 : handle_streamed_transaction(LOGICAL_REP_MSG_DELETE, s))
1881 3006 : 41615 : return;
3007 : :
1601 tgl@sss.pgh.pa.us 3008 : 40320 : begin_replication_step();
3009 : :
3204 peter_e@gmx.net 3010 : 40320 : relid = logicalrep_read_delete(s, &oldtup);
3011 : 40320 : rel = logicalrep_rel_open(relid, RowExclusiveLock);
3141 3012 [ - + ]: 40320 : if (!should_apply_changes_for_rel(rel))
3013 : : {
3014 : : /*
3015 : : * The relation can't become interesting in the middle of the
3016 : : * transaction so it's safe to unlock it.
3017 : : */
3141 peter_e@gmx.net 3018 :UBC 0 : logicalrep_rel_close(rel, RowExclusiveLock);
1601 tgl@sss.pgh.pa.us 3019 : 0 : end_replication_step();
3141 peter_e@gmx.net 3020 : 0 : return;
3021 : : }
3022 : :
3023 : : /* Set relation for error callback */
1523 akapila@postgresql.o 3024 :CBC 40320 : apply_error_callback_arg.rel = rel;
3025 : :
3026 : : /* Check if we can do the delete. */
3204 peter_e@gmx.net 3027 : 40320 : check_relation_updatable(rel);
3028 : :
3029 : : /*
3030 : : * Make sure that any user-supplied code runs as the table owner, unless
3031 : : * the user has opted out of that behavior.
3032 : : */
938 rhaas@postgresql.org 3033 : 40320 : run_as_owner = MySubscription->runasowner;
3034 [ + + ]: 40320 : if (!run_as_owner)
3035 : 40318 : SwitchToUntrustedUser(rel->localrel->rd_rel->relowner, &ucxt);
3036 : :
3037 : : /* Initialize the executor state. */
1620 tgl@sss.pgh.pa.us 3038 : 40320 : edata = create_edata_for_relation(rel);
3039 : 40320 : estate = edata->estate;
2811 andres@anarazel.de 3040 : 40320 : remoteslot = ExecInitExtraTupleSlot(estate,
2539 3041 : 40320 : RelationGetDescr(rel->localrel),
3042 : : &TTSOpsVirtual);
3043 : :
3044 : : /* Build the search tuple. */
3204 peter_e@gmx.net 3045 [ - + ]: 40320 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
1928 tgl@sss.pgh.pa.us 3046 : 40320 : slot_store_data(remoteslot, rel, &oldtup);
3204 peter_e@gmx.net 3047 : 40320 : MemoryContextSwitchTo(oldctx);
3048 : :
3049 : : /* For a partitioned table, apply delete to correct partition. */
2031 peter@eisentraut.org 3050 [ + + ]: 40320 : if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
1620 tgl@sss.pgh.pa.us 3051 : 17 : apply_handle_tuple_routing(edata,
3052 : : remoteslot, NULL, CMD_DELETE);
3053 : : else
3054 : : {
251 3055 : 40303 : ResultRelInfo *relinfo = edata->targetRelInfo;
3056 : :
3057 : 40303 : ExecOpenIndices(relinfo, false);
3058 : 40303 : apply_handle_delete_internal(edata, relinfo,
3059 : : remoteslot, rel->localindexoid);
3060 : 40303 : ExecCloseIndices(relinfo);
3061 : : }
3062 : :
1620 3063 : 40320 : finish_edata(edata);
3064 : :
3065 : : /* Reset relation for error callback */
1523 akapila@postgresql.o 3066 : 40320 : apply_error_callback_arg.rel = NULL;
3067 : :
938 rhaas@postgresql.org 3068 [ + + ]: 40320 : if (!run_as_owner)
3069 : 40318 : RestoreUserContext(&ucxt);
3070 : :
2044 peter@eisentraut.org 3071 : 40320 : logicalrep_rel_close(rel, NoLock);
3072 : :
1601 tgl@sss.pgh.pa.us 3073 : 40320 : end_replication_step();
3074 : : }
3075 : :
3076 : : /*
3077 : : * Workhorse for apply_handle_delete()
3078 : : * relinfo is for the relation we're actually deleting from
3079 : : * (could be a child partition of edata->targetRelInfo)
3080 : : */
3081 : : static void
1620 3082 : 40320 : apply_handle_delete_internal(ApplyExecutionData *edata,
3083 : : ResultRelInfo *relinfo,
3084 : : TupleTableSlot *remoteslot,
3085 : : Oid localindexoid)
3086 : : {
3087 : 40320 : EState *estate = edata->estate;
2044 peter@eisentraut.org 3088 : 40320 : Relation localrel = relinfo->ri_RelationDesc;
1620 tgl@sss.pgh.pa.us 3089 : 40320 : LogicalRepRelation *remoterel = &edata->targetRel->remoterel;
3090 : : EPQState epqstate;
3091 : : TupleTableSlot *localslot;
218 akapila@postgresql.o 3092 : 40320 : ConflictTupleInfo conflicttuple = {0};
3093 : : bool found;
3094 : :
893 tgl@sss.pgh.pa.us 3095 : 40320 : EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1, NIL);
3096 : :
3097 : : /* Caller should have opened indexes already. */
251 3098 [ + + + + : 40320 : Assert(relinfo->ri_IndexRelationDescs != NULL ||
- + ]
3099 : : !localrel->rd_rel->relhasindex ||
3100 : : RelationGetIndexList(localrel) == NIL);
3101 : :
826 msawada@postgresql.o 3102 : 40320 : found = FindReplTupleInLocalRel(edata, localrel, remoterel, localindexoid,
3103 : : remoteslot, &localslot);
3104 : :
3105 : : /* If found delete it. */
3204 peter_e@gmx.net 3106 [ + + ]: 40320 : if (found)
3107 : : {
3108 : : /*
3109 : : * Report the conflict if the tuple was modified by a different
3110 : : * origin.
3111 : : */
218 akapila@postgresql.o 3112 [ + + ]: 40311 : if (GetTupleTransactionInfo(localslot, &conflicttuple.xmin,
3113 : 5 : &conflicttuple.origin, &conflicttuple.ts) &&
3114 [ + + ]: 5 : conflicttuple.origin != replorigin_session_origin)
3115 : : {
3116 : 4 : conflicttuple.slot = localslot;
425 3117 : 4 : ReportApplyConflict(estate, relinfo, LOG, CT_DELETE_ORIGIN_DIFFERS,
3118 : : remoteslot, NULL,
218 3119 : 4 : list_make1(&conflicttuple));
3120 : : }
3121 : :
3204 peter_e@gmx.net 3122 : 40311 : EvalPlanQualSetSlot(&epqstate, localslot);
3123 : :
3124 : : /* Do the actual delete. */
1390 jdavis@postgresql.or 3125 : 40311 : TargetPrivilegesCheck(relinfo->ri_RelationDesc, ACL_DELETE);
1840 heikki.linnakangas@i 3126 : 40311 : ExecSimpleRelationDelete(relinfo, estate, &epqstate, localslot);
3127 : : }
3128 : : else
3129 : : {
3130 : : /*
3131 : : * The tuple to be deleted could not be found. Do nothing except for
3132 : : * emitting a log message.
3133 : : */
434 akapila@postgresql.o 3134 : 9 : ReportApplyConflict(estate, relinfo, LOG, CT_DELETE_MISSING,
218 3135 : 9 : remoteslot, NULL, list_make1(&conflicttuple));
3136 : : }
3137 : :
3138 : : /* Cleanup. */
3204 peter_e@gmx.net 3139 : 40320 : EvalPlanQualEnd(&epqstate);
3140 : 40320 : }
3141 : :
3142 : : /*
3143 : : * Try to find a tuple received from the publication side (in 'remoteslot') in
3144 : : * the corresponding local relation using either replica identity index,
3145 : : * primary key, index or if needed, sequential scan.
3146 : : *
3147 : : * Local tuple, if found, is returned in '*localslot'.
3148 : : */
3149 : : static bool
826 msawada@postgresql.o 3150 : 72272 : FindReplTupleInLocalRel(ApplyExecutionData *edata, Relation localrel,
3151 : : LogicalRepRelation *remoterel,
3152 : : Oid localidxoid,
3153 : : TupleTableSlot *remoteslot,
3154 : : TupleTableSlot **localslot)
3155 : : {
3156 : 72272 : EState *estate = edata->estate;
3157 : : bool found;
3158 : :
3159 : : /*
3160 : : * Regardless of the top-level operation, we're performing a read here, so
3161 : : * check for SELECT privileges.
3162 : : */
1389 jdavis@postgresql.or 3163 : 72272 : TargetPrivilegesCheck(localrel, ACL_SELECT);
3164 : :
2036 peter@eisentraut.org 3165 : 72268 : *localslot = table_slot_create(localrel, &estate->es_tupleTable);
3166 : :
958 akapila@postgresql.o 3167 [ + + - + ]: 72268 : Assert(OidIsValid(localidxoid) ||
3168 : : (remoterel->replident == REPLICA_IDENTITY_FULL));
3169 : :
3170 [ + + ]: 72268 : if (OidIsValid(localidxoid))
3171 : : {
3172 : : #ifdef USE_ASSERT_CHECKING
826 msawada@postgresql.o 3173 : 72118 : Relation idxrel = index_open(localidxoid, AccessShareLock);
3174 : :
3175 : : /* Index must be PK, RI, or usable for REPLICA IDENTITY FULL tables */
412 akapila@postgresql.o 3176 [ + + + - : 72118 : Assert(GetRelationIdentityOrPK(localrel) == localidxoid ||
- + ]
3177 : : (remoterel->replident == REPLICA_IDENTITY_FULL &&
3178 : : IsIndexUsableForReplicaIdentityFull(idxrel,
3179 : : edata->targetRel->attrmap)));
826 msawada@postgresql.o 3180 : 72118 : index_close(idxrel, AccessShareLock);
3181 : : #endif
3182 : :
958 akapila@postgresql.o 3183 : 72118 : found = RelationFindReplTupleByIndex(localrel, localidxoid,
3184 : : LockTupleExclusive,
3185 : : remoteslot, *localslot);
3186 : : }
3187 : : else
2036 peter@eisentraut.org 3188 : 150 : found = RelationFindReplTupleSeq(localrel, LockTupleExclusive,
3189 : : remoteslot, *localslot);
3190 : :
3191 : 72268 : return found;
3192 : : }
3193 : :
3194 : : /*
3195 : : * Determine whether the index can reliably locate the deleted tuple in the
3196 : : * local relation.
3197 : : *
3198 : : * An index may exclude deleted tuples if it was re-indexed or re-created during
3199 : : * change application. Therefore, an index is considered usable only if the
3200 : : * conflict detection slot.xmin (conflict_detection_xmin) is greater than the
3201 : : * index tuple's xmin. This ensures that any tuples deleted prior to the index
3202 : : * creation or re-indexing are not relevant for conflict detection in the
3203 : : * current apply worker.
3204 : : *
3205 : : * Note that indexes may also be excluded if they were modified by other DDL
3206 : : * operations, such as ALTER INDEX. However, this is acceptable, as the
3207 : : * likelihood of such DDL changes coinciding with the need to scan dead
3208 : : * tuples for the update_deleted is low.
3209 : : */
3210 : : static bool
85 akapila@postgresql.o 3211 :GNC 1 : IsIndexUsableForFindingDeletedTuple(Oid localindexoid,
3212 : : TransactionId conflict_detection_xmin)
3213 : : {
3214 : : HeapTuple index_tuple;
3215 : : TransactionId index_xmin;
3216 : :
3217 : 1 : index_tuple = SearchSysCache1(INDEXRELID, ObjectIdGetDatum(localindexoid));
3218 : :
3219 [ - + ]: 1 : if (!HeapTupleIsValid(index_tuple)) /* should not happen */
85 akapila@postgresql.o 3220 [ # # ]:UNC 0 : elog(ERROR, "cache lookup failed for index %u", localindexoid);
3221 : :
3222 : : /*
3223 : : * No need to check for a frozen transaction ID, as
3224 : : * TransactionIdPrecedes() manages it internally, treating it as falling
3225 : : * behind the conflict_detection_xmin.
3226 : : */
85 akapila@postgresql.o 3227 :GNC 1 : index_xmin = HeapTupleHeaderGetXmin(index_tuple->t_data);
3228 : :
3229 : 1 : ReleaseSysCache(index_tuple);
3230 : :
3231 : 1 : return TransactionIdPrecedes(index_xmin, conflict_detection_xmin);
3232 : : }
3233 : :
3234 : : /*
3235 : : * Attempts to locate a deleted tuple in the local relation that matches the
3236 : : * values of the tuple received from the publication side (in 'remoteslot').
3237 : : * The search is performed using either the replica identity index, primary
3238 : : * key, other available index, or a sequential scan if necessary.
3239 : : *
3240 : : * Returns true if the deleted tuple is found. If found, the transaction ID,
3241 : : * origin, and commit timestamp of the deletion are stored in '*delete_xid',
3242 : : * '*delete_origin', and '*delete_time' respectively.
3243 : : */
3244 : : static bool
3245 : 24 : FindDeletedTupleInLocalRel(Relation localrel, Oid localidxoid,
3246 : : TupleTableSlot *remoteslot,
3247 : : TransactionId *delete_xid, RepOriginId *delete_origin,
3248 : : TimestampTz *delete_time)
3249 : : {
3250 : : TransactionId oldestxmin;
3251 : :
3252 : : /*
3253 : : * Return false if either dead tuples are not retained or commit timestamp
3254 : : * data is not available.
3255 : : */
3256 [ + + - + ]: 24 : if (!MySubscription->retaindeadtuples || !track_commit_timestamp)
3257 : 22 : return false;
3258 : :
3259 : : /*
3260 : : * For conflict detection, we use the leader worker's
3261 : : * oldest_nonremovable_xid value instead of invoking
3262 : : * GetOldestNonRemovableTransactionId() or using the conflict detection
3263 : : * slot's xmin. The oldest_nonremovable_xid acts as a threshold to
3264 : : * identify tuples that were recently deleted. These deleted tuples are no
3265 : : * longer visible to concurrent transactions. However, if a remote update
3266 : : * matches such a tuple, we log an update_deleted conflict.
3267 : : *
3268 : : * While GetOldestNonRemovableTransactionId() and slot.xmin may return
3269 : : * transaction IDs older than oldest_nonremovable_xid, for our current
3270 : : * purpose, it is acceptable to treat tuples deleted by transactions prior
3271 : : * to oldest_nonremovable_xid as update_missing conflicts.
3272 : : */
56 3273 [ + - ]: 2 : if (am_leader_apply_worker())
3274 : : {
3275 : 2 : oldestxmin = MyLogicalRepWorker->oldest_nonremovable_xid;
3276 : : }
3277 : : else
3278 : : {
3279 : : LogicalRepWorker *leader;
3280 : :
3281 : : /*
3282 : : * Obtain the information from the leader apply worker as only the
3283 : : * leader manages oldest_nonremovable_xid (see
3284 : : * maybe_advance_nonremovable_xid() for details).
3285 : : */
56 akapila@postgresql.o 3286 :UNC 0 : LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
3287 : 0 : leader = logicalrep_worker_find(MyLogicalRepWorker->subid,
3288 : : InvalidOid, false);
49 3289 [ # # ]: 0 : if (!leader)
3290 : : {
3291 [ # # ]: 0 : ereport(ERROR,
3292 : : (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
3293 : : errmsg("could not detect conflict as the leader apply worker has exited")));
3294 : : }
3295 : :
56 3296 [ # # ]: 0 : SpinLockAcquire(&leader->relmutex);
3297 : 0 : oldestxmin = leader->oldest_nonremovable_xid;
3298 : 0 : SpinLockRelease(&leader->relmutex);
3299 : 0 : LWLockRelease(LogicalRepWorkerLock);
3300 : : }
3301 : :
3302 : : /*
3303 : : * Return false if the leader apply worker has stopped retaining
3304 : : * information for detecting conflicts. This implies that update_deleted
3305 : : * can no longer be reliably detected.
3306 : : */
56 akapila@postgresql.o 3307 [ - + ]:GNC 2 : if (!TransactionIdIsValid(oldestxmin))
56 akapila@postgresql.o 3308 :UNC 0 : return false;
3309 : :
85 akapila@postgresql.o 3310 [ + + + - ]:GNC 3 : if (OidIsValid(localidxoid) &&
3311 : 1 : IsIndexUsableForFindingDeletedTuple(localidxoid, oldestxmin))
3312 : 1 : return RelationFindDeletedTupleInfoByIndex(localrel, localidxoid,
3313 : : remoteslot, oldestxmin,
3314 : : delete_xid, delete_origin,
3315 : : delete_time);
3316 : : else
3317 : 1 : return RelationFindDeletedTupleInfoSeq(localrel, remoteslot,
3318 : : oldestxmin, delete_xid,
3319 : : delete_origin, delete_time);
3320 : : }
3321 : :
3322 : : /*
3323 : : * This handles insert, update, delete on a partitioned table.
3324 : : */
3325 : : static void
1620 tgl@sss.pgh.pa.us 3326 :CBC 91 : apply_handle_tuple_routing(ApplyExecutionData *edata,
3327 : : TupleTableSlot *remoteslot,
3328 : : LogicalRepTupleData *newtup,
3329 : : CmdType operation)
3330 : : {
3331 : 91 : EState *estate = edata->estate;
3332 : 91 : LogicalRepRelMapEntry *relmapentry = edata->targetRel;
3333 : 91 : ResultRelInfo *relinfo = edata->targetRelInfo;
2031 peter@eisentraut.org 3334 : 91 : Relation parentrel = relinfo->ri_RelationDesc;
3335 : : ModifyTableState *mtstate;
3336 : : PartitionTupleRouting *proute;
3337 : : ResultRelInfo *partrelinfo;
3338 : : Relation partrel;
3339 : : TupleTableSlot *remoteslot_part;
3340 : : TupleConversionMap *map;
3341 : : MemoryContext oldctx;
1225 akapila@postgresql.o 3342 : 91 : LogicalRepRelMapEntry *part_entry = NULL;
3343 : 91 : AttrMap *attrmap = NULL;
3344 : :
3345 : : /* ModifyTableState is needed for ExecFindPartition(). */
1620 tgl@sss.pgh.pa.us 3346 : 91 : edata->mtstate = mtstate = makeNode(ModifyTableState);
2031 peter@eisentraut.org 3347 : 91 : mtstate->ps.plan = NULL;
3348 : 91 : mtstate->ps.state = estate;
3349 : 91 : mtstate->operation = operation;
3350 : 91 : mtstate->resultRelInfo = relinfo;
3351 : :
3352 : : /* ... as is PartitionTupleRouting. */
1620 tgl@sss.pgh.pa.us 3353 : 91 : edata->proute = proute = ExecSetupPartitionTupleRouting(estate, parentrel);
3354 : :
3355 : : /*
3356 : : * Find the partition to which the "search tuple" belongs.
3357 : : */
2031 peter@eisentraut.org 3358 [ - + ]: 91 : Assert(remoteslot != NULL);
3359 [ + - ]: 91 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
3360 : 91 : partrelinfo = ExecFindPartition(mtstate, relinfo, proute,
3361 : : remoteslot, estate);
3362 [ - + ]: 91 : Assert(partrelinfo != NULL);
3363 : 91 : partrel = partrelinfo->ri_RelationDesc;
3364 : :
3365 : : /*
3366 : : * Check for supported relkind. We need this since partitions might be of
3367 : : * unsupported relkinds; and the set of partitions can change, so checking
3368 : : * at CREATE/ALTER SUBSCRIPTION would be insufficient.
3369 : : */
1091 tgl@sss.pgh.pa.us 3370 : 91 : CheckSubscriptionRelkind(partrel->rd_rel->relkind,
5 akapila@postgresql.o 3371 :GNC 91 : relmapentry->remoterel.relkind,
1091 tgl@sss.pgh.pa.us 3372 :CBC 91 : get_namespace_name(RelationGetNamespace(partrel)),
3373 : 91 : RelationGetRelationName(partrel));
3374 : :
3375 : : /*
3376 : : * To perform any of the operations below, the tuple must match the
3377 : : * partition's rowtype. Convert if needed or just copy, using a dedicated
3378 : : * slot to store the tuple in any case.
3379 : : */
1835 heikki.linnakangas@i 3380 : 91 : remoteslot_part = partrelinfo->ri_PartitionTupleSlot;
2031 peter@eisentraut.org 3381 [ + + ]: 91 : if (remoteslot_part == NULL)
3382 : 58 : remoteslot_part = table_slot_create(partrel, &estate->es_tupleTable);
1061 alvherre@alvh.no-ip. 3383 : 91 : map = ExecGetRootToChildMap(partrelinfo, estate);
2031 peter@eisentraut.org 3384 [ + + ]: 91 : if (map != NULL)
3385 : : {
1225 akapila@postgresql.o 3386 : 33 : attrmap = map->attrMap;
3387 : 33 : remoteslot_part = execute_attr_map_slot(attrmap, remoteslot,
3388 : : remoteslot_part);
3389 : : }
3390 : : else
3391 : : {
2031 peter@eisentraut.org 3392 : 58 : remoteslot_part = ExecCopySlot(remoteslot_part, remoteslot);
3393 : 58 : slot_getallattrs(remoteslot_part);
3394 : : }
3395 : 91 : MemoryContextSwitchTo(oldctx);
3396 : :
3397 : : /* Check if we can do the update or delete on the leaf partition. */
1225 akapila@postgresql.o 3398 [ + + + + ]: 91 : if (operation == CMD_UPDATE || operation == CMD_DELETE)
3399 : : {
3400 : 30 : part_entry = logicalrep_partition_open(relmapentry, partrel,
3401 : : attrmap);
3402 : 30 : check_relation_updatable(part_entry);
3403 : : }
3404 : :
2031 peter@eisentraut.org 3405 [ + + + - ]: 91 : switch (operation)
3406 : : {
3407 : 61 : case CMD_INSERT:
1620 tgl@sss.pgh.pa.us 3408 : 61 : apply_handle_insert_internal(edata, partrelinfo,
3409 : : remoteslot_part);
2031 peter@eisentraut.org 3410 : 44 : break;
3411 : :
3412 : 17 : case CMD_DELETE:
1620 tgl@sss.pgh.pa.us 3413 : 17 : apply_handle_delete_internal(edata, partrelinfo,
3414 : : remoteslot_part,
3415 : : part_entry->localindexoid);
2031 peter@eisentraut.org 3416 : 17 : break;
3417 : :
3418 : 13 : case CMD_UPDATE:
3419 : :
3420 : : /*
3421 : : * For UPDATE, depending on whether or not the updated tuple
3422 : : * satisfies the partition's constraint, perform a simple UPDATE
3423 : : * of the partition or move the updated tuple into a different
3424 : : * suitable partition.
3425 : : */
3426 : : {
3427 : : TupleTableSlot *localslot;
3428 : : ResultRelInfo *partrelinfo_new;
3429 : : Relation partrel_new;
3430 : : bool found;
3431 : : EPQState epqstate;
218 akapila@postgresql.o 3432 : 13 : ConflictTupleInfo conflicttuple = {0};
3433 : :
3434 : : /* Get the matching local tuple from the partition. */
826 msawada@postgresql.o 3435 : 13 : found = FindReplTupleInLocalRel(edata, partrel,
3436 : : &part_entry->remoterel,
3437 : : part_entry->localindexoid,
3438 : : remoteslot_part, &localslot);
1600 tgl@sss.pgh.pa.us 3439 [ + + ]: 13 : if (!found)
3440 : : {
3441 : : ConflictType type;
434 akapila@postgresql.o 3442 : 2 : TupleTableSlot *newslot = localslot;
3443 : :
3444 : : /*
3445 : : * Detecting whether the tuple was recently deleted or
3446 : : * never existed is crucial to avoid misleading the user
3447 : : * during conflict handling.
3448 : : */
85 akapila@postgresql.o 3449 [ - + ]:GNC 2 : if (FindDeletedTupleInLocalRel(partrel,
3450 : : part_entry->localindexoid,
3451 : : remoteslot_part,
3452 : : &conflicttuple.xmin,
3453 : : &conflicttuple.origin,
85 akapila@postgresql.o 3454 :UNC 0 : &conflicttuple.ts) &&
3455 [ # # ]: 0 : conflicttuple.origin != replorigin_session_origin)
3456 : 0 : type = CT_UPDATE_DELETED;
3457 : : else
85 akapila@postgresql.o 3458 :GNC 2 : type = CT_UPDATE_MISSING;
3459 : :
3460 : : /* Store the new tuple for conflict reporting */
434 akapila@postgresql.o 3461 :CBC 2 : slot_store_data(newslot, part_entry, newtup);
3462 : :
3463 : : /*
3464 : : * The tuple to be updated could not be found or was
3465 : : * deleted. Do nothing except for emitting a log message.
3466 : : */
218 3467 : 2 : ReportApplyConflict(estate, partrelinfo, LOG,
3468 : : type, remoteslot_part, newslot,
85 akapila@postgresql.o 3469 :GNC 2 : list_make1(&conflicttuple));
3470 : :
1600 tgl@sss.pgh.pa.us 3471 :CBC 2 : return;
3472 : : }
3473 : :
3474 : : /*
3475 : : * Report the conflict if the tuple was modified by a
3476 : : * different origin.
3477 : : */
218 akapila@postgresql.o 3478 [ + + ]: 11 : if (GetTupleTransactionInfo(localslot, &conflicttuple.xmin,
3479 : : &conflicttuple.origin,
3480 : 1 : &conflicttuple.ts) &&
3481 [ + - ]: 1 : conflicttuple.origin != replorigin_session_origin)
3482 : : {
3483 : : TupleTableSlot *newslot;
3484 : :
3485 : : /* Store the new tuple for conflict reporting */
434 3486 : 1 : newslot = table_slot_create(partrel, &estate->es_tupleTable);
3487 : 1 : slot_store_data(newslot, part_entry, newtup);
3488 : :
218 3489 : 1 : conflicttuple.slot = localslot;
3490 : :
425 3491 : 1 : ReportApplyConflict(estate, partrelinfo, LOG, CT_UPDATE_ORIGIN_DIFFERS,
3492 : : remoteslot_part, newslot,
218 3493 : 1 : list_make1(&conflicttuple));
3494 : : }
3495 : :
3496 : : /*
3497 : : * Apply the update to the local tuple, putting the result in
3498 : : * remoteslot_part.
3499 : : */
1600 tgl@sss.pgh.pa.us 3500 [ + - ]: 11 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
3501 : 11 : slot_modify_data(remoteslot_part, localslot, part_entry,
3502 : : newtup);
3503 : 11 : MemoryContextSwitchTo(oldctx);
3504 : :
453 akapila@postgresql.o 3505 : 11 : EvalPlanQualInit(&epqstate, estate, NULL, NIL, -1, NIL);
3506 : :
3507 : : /*
3508 : : * Does the updated tuple still satisfy the current
3509 : : * partition's constraint?
3510 : : */
1868 tgl@sss.pgh.pa.us 3511 [ + - + + ]: 22 : if (!partrel->rd_rel->relispartition ||
2031 peter@eisentraut.org 3512 : 11 : ExecPartitionCheck(partrelinfo, remoteslot_part, estate,
3513 : : false))
3514 : : {
3515 : : /*
3516 : : * Yes, so simply UPDATE the partition. We don't call
3517 : : * apply_handle_update_internal() here, which would
3518 : : * normally do the following work, to avoid repeating some
3519 : : * work already done above to find the local tuple in the
3520 : : * partition.
3521 : : */
434 akapila@postgresql.o 3522 : 10 : InitConflictIndexes(partrelinfo);
3523 : :
2031 peter@eisentraut.org 3524 : 10 : EvalPlanQualSetSlot(&epqstate, remoteslot_part);
1390 jdavis@postgresql.or 3525 : 10 : TargetPrivilegesCheck(partrelinfo->ri_RelationDesc,
3526 : : ACL_UPDATE);
1840 heikki.linnakangas@i 3527 : 10 : ExecSimpleRelationUpdate(partrelinfo, estate, &epqstate,
3528 : : localslot, remoteslot_part);
3529 : : }
3530 : : else
3531 : : {
3532 : : /* Move the tuple into the new partition. */
3533 : :
3534 : : /*
3535 : : * New partition will be found using tuple routing, which
3536 : : * can only occur via the parent table. We might need to
3537 : : * convert the tuple to the parent's rowtype. Note that
3538 : : * this is the tuple found in the partition, not the
3539 : : * original search tuple received by this function.
3540 : : */
2031 peter@eisentraut.org 3541 [ + - ]: 1 : if (map)
3542 : : {
3543 : : TupleConversionMap *PartitionToRootMap =
893 tgl@sss.pgh.pa.us 3544 : 1 : convert_tuples_by_name(RelationGetDescr(partrel),
3545 : : RelationGetDescr(parentrel));
3546 : :
3547 : : remoteslot =
2031 peter@eisentraut.org 3548 : 1 : execute_attr_map_slot(PartitionToRootMap->attrMap,
3549 : : remoteslot_part, remoteslot);
3550 : : }
3551 : : else
3552 : : {
2031 peter@eisentraut.org 3553 :UBC 0 : remoteslot = ExecCopySlot(remoteslot, remoteslot_part);
3554 : 0 : slot_getallattrs(remoteslot);
3555 : : }
3556 : :
3557 : : /* Find the new partition. */
2031 peter@eisentraut.org 3558 [ + - ]:CBC 1 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
3559 : 1 : partrelinfo_new = ExecFindPartition(mtstate, relinfo,
3560 : : proute, remoteslot,
3561 : : estate);
3562 : 1 : MemoryContextSwitchTo(oldctx);
3563 [ - + ]: 1 : Assert(partrelinfo_new != partrelinfo);
1091 tgl@sss.pgh.pa.us 3564 : 1 : partrel_new = partrelinfo_new->ri_RelationDesc;
3565 : :
3566 : : /* Check that new partition also has supported relkind. */
3567 : 1 : CheckSubscriptionRelkind(partrel_new->rd_rel->relkind,
5 akapila@postgresql.o 3568 :GNC 1 : relmapentry->remoterel.relkind,
1091 tgl@sss.pgh.pa.us 3569 :CBC 1 : get_namespace_name(RelationGetNamespace(partrel_new)),
3570 : 1 : RelationGetRelationName(partrel_new));
3571 : :
3572 : : /* DELETE old tuple found in the old partition. */
453 akapila@postgresql.o 3573 : 1 : EvalPlanQualSetSlot(&epqstate, localslot);
3574 : 1 : TargetPrivilegesCheck(partrelinfo->ri_RelationDesc, ACL_DELETE);
3575 : 1 : ExecSimpleRelationDelete(partrelinfo, estate, &epqstate, localslot);
3576 : :
3577 : : /* INSERT new tuple into the new partition. */
3578 : :
3579 : : /*
3580 : : * Convert the replacement tuple to match the destination
3581 : : * partition rowtype.
3582 : : */
2031 peter@eisentraut.org 3583 [ + - ]: 1 : oldctx = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
1835 heikki.linnakangas@i 3584 : 1 : remoteslot_part = partrelinfo_new->ri_PartitionTupleSlot;
2031 peter@eisentraut.org 3585 [ + - ]: 1 : if (remoteslot_part == NULL)
1091 tgl@sss.pgh.pa.us 3586 : 1 : remoteslot_part = table_slot_create(partrel_new,
3587 : : &estate->es_tupleTable);
1061 alvherre@alvh.no-ip. 3588 : 1 : map = ExecGetRootToChildMap(partrelinfo_new, estate);
2031 peter@eisentraut.org 3589 [ - + ]: 1 : if (map != NULL)
3590 : : {
2031 peter@eisentraut.org 3591 :UBC 0 : remoteslot_part = execute_attr_map_slot(map->attrMap,
3592 : : remoteslot,
3593 : : remoteslot_part);
3594 : : }
3595 : : else
3596 : : {
2031 peter@eisentraut.org 3597 :CBC 1 : remoteslot_part = ExecCopySlot(remoteslot_part,
3598 : : remoteslot);
3599 : 1 : slot_getallattrs(remoteslot);
3600 : : }
3601 : 1 : MemoryContextSwitchTo(oldctx);
1620 tgl@sss.pgh.pa.us 3602 : 1 : apply_handle_insert_internal(edata, partrelinfo_new,
3603 : : remoteslot_part);
3604 : : }
3605 : :
453 akapila@postgresql.o 3606 : 11 : EvalPlanQualEnd(&epqstate);
3607 : : }
2031 peter@eisentraut.org 3608 : 11 : break;
3609 : :
2031 peter@eisentraut.org 3610 :UBC 0 : default:
3611 [ # # ]: 0 : elog(ERROR, "unrecognized CmdType: %d", (int) operation);
3612 : : break;
3613 : : }
3614 : : }
3615 : :
3616 : : /*
3617 : : * Handle TRUNCATE message.
3618 : : *
3619 : : * TODO: FDW support
3620 : : */
3621 : : static void
2761 peter_e@gmx.net 3622 :CBC 19 : apply_handle_truncate(StringInfo s)
3623 : : {
2742 tgl@sss.pgh.pa.us 3624 : 19 : bool cascade = false;
3625 : 19 : bool restart_seqs = false;
3626 : 19 : List *remote_relids = NIL;
3627 : 19 : List *remote_rels = NIL;
3628 : 19 : List *rels = NIL;
2031 peter@eisentraut.org 3629 : 19 : List *part_rels = NIL;
2742 tgl@sss.pgh.pa.us 3630 : 19 : List *relids = NIL;
3631 : 19 : List *relids_logged = NIL;
3632 : : ListCell *lc;
1621 3633 : 19 : LOCKMODE lockmode = AccessExclusiveLock;
3634 : :
3635 : : /*
3636 : : * Quick return if we are skipping data modification changes or handling
3637 : : * streamed transactions.
3638 : : */
1316 akapila@postgresql.o 3639 [ + - - + ]: 38 : if (is_skipping_changes() ||
3640 : 19 : handle_streamed_transaction(LOGICAL_REP_MSG_TRUNCATE, s))
1881 akapila@postgresql.o 3641 :UBC 0 : return;
3642 : :
1601 tgl@sss.pgh.pa.us 3643 :CBC 19 : begin_replication_step();
3644 : :
2761 peter_e@gmx.net 3645 : 19 : remote_relids = logicalrep_read_truncate(s, &cascade, &restart_seqs);
3646 : :
3647 [ + - + + : 47 : foreach(lc, remote_relids)
+ + ]
3648 : : {
3649 : 28 : LogicalRepRelId relid = lfirst_oid(lc);
3650 : : LogicalRepRelMapEntry *rel;
3651 : :
1621 akapila@postgresql.o 3652 : 28 : rel = logicalrep_rel_open(relid, lockmode);
2761 peter_e@gmx.net 3653 [ - + ]: 28 : if (!should_apply_changes_for_rel(rel))
3654 : : {
3655 : : /*
3656 : : * The relation can't become interesting in the middle of the
3657 : : * transaction so it's safe to unlock it.
3658 : : */
1621 akapila@postgresql.o 3659 :UBC 0 : logicalrep_rel_close(rel, lockmode);
2761 peter_e@gmx.net 3660 : 0 : continue;
3661 : : }
3662 : :
2761 peter_e@gmx.net 3663 :CBC 28 : remote_rels = lappend(remote_rels, rel);
1390 jdavis@postgresql.or 3664 : 28 : TargetPrivilegesCheck(rel->localrel, ACL_TRUNCATE);
2761 peter_e@gmx.net 3665 : 28 : rels = lappend(rels, rel->localrel);
3666 : 28 : relids = lappend_oid(relids, rel->localreloid);
3667 [ - + - - : 28 : if (RelationIsLogicallyLogged(rel->localrel))
- - - - -
- - - -
- ]
2745 peter_e@gmx.net 3668 :UBC 0 : relids_logged = lappend_oid(relids_logged, rel->localreloid);
3669 : :
3670 : : /*
3671 : : * Truncate partitions if we got a message to truncate a partitioned
3672 : : * table.
3673 : : */
2031 peter@eisentraut.org 3674 [ + + ]:CBC 28 : if (rel->localrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
3675 : : {
3676 : : ListCell *child;
3677 : 4 : List *children = find_all_inheritors(rel->localreloid,
3678 : : lockmode,
3679 : : NULL);
3680 : :
3681 [ + - + + : 15 : foreach(child, children)
+ + ]
3682 : : {
3683 : 11 : Oid childrelid = lfirst_oid(child);
3684 : : Relation childrel;
3685 : :
3686 [ + + ]: 11 : if (list_member_oid(relids, childrelid))
3687 : 4 : continue;
3688 : :
3689 : : /* find_all_inheritors already got lock */
3690 : 7 : childrel = table_open(childrelid, NoLock);
3691 : :
3692 : : /*
3693 : : * Ignore temp tables of other backends. See similar code in
3694 : : * ExecuteTruncate().
3695 : : */
3696 [ - + - - ]: 7 : if (RELATION_IS_OTHER_TEMP(childrel))
3697 : : {
1621 akapila@postgresql.o 3698 :UBC 0 : table_close(childrel, lockmode);
2031 peter@eisentraut.org 3699 : 0 : continue;
3700 : : }
3701 : :
1390 jdavis@postgresql.or 3702 :CBC 7 : TargetPrivilegesCheck(childrel, ACL_TRUNCATE);
2031 peter@eisentraut.org 3703 : 7 : rels = lappend(rels, childrel);
3704 : 7 : part_rels = lappend(part_rels, childrel);
3705 : 7 : relids = lappend_oid(relids, childrelid);
3706 : : /* Log this relation only if needed for logical decoding */
3707 [ - + - - : 7 : if (RelationIsLogicallyLogged(childrel))
- - - - -
- - - -
- ]
2031 peter@eisentraut.org 3708 :UBC 0 : relids_logged = lappend_oid(relids_logged, childrelid);
3709 : : }
3710 : : }
3711 : : }
3712 : :
3713 : : /*
3714 : : * Even if we used CASCADE on the upstream primary we explicitly default
3715 : : * to replaying changes without further cascading. This might be later
3716 : : * changeable with a user specified option.
3717 : : *
3718 : : * MySubscription->runasowner tells us whether we want to execute
3719 : : * replication actions as the subscription owner; the last argument to
3720 : : * TruncateGuts tells it whether we want to switch to the table owner.
3721 : : * Those are exactly opposite conditions.
3722 : : */
1664 fujii@postgresql.org 3723 :CBC 19 : ExecuteTruncateGuts(rels,
3724 : : relids,
3725 : : relids_logged,
3726 : : DROP_RESTRICT,
3727 : : restart_seqs,
938 rhaas@postgresql.org 3728 : 19 : !MySubscription->runasowner);
2761 peter_e@gmx.net 3729 [ + - + + : 47 : foreach(lc, remote_rels)
+ + ]
3730 : : {
3731 : 28 : LogicalRepRelMapEntry *rel = lfirst(lc);
3732 : :
3733 : 28 : logicalrep_rel_close(rel, NoLock);
3734 : : }
2031 peter@eisentraut.org 3735 [ + + + + : 26 : foreach(lc, part_rels)
+ + ]
3736 : : {
3737 : 7 : Relation rel = lfirst(lc);
3738 : :
3739 : 7 : table_close(rel, NoLock);
3740 : : }
3741 : :
1601 tgl@sss.pgh.pa.us 3742 : 19 : end_replication_step();
3743 : : }
3744 : :
3745 : :
3746 : : /*
3747 : : * Logical replication protocol message dispatcher.
3748 : : */
3749 : : void
3204 peter_e@gmx.net 3750 : 337348 : apply_dispatch(StringInfo s)
3751 : : {
1821 akapila@postgresql.o 3752 : 337348 : LogicalRepMsgType action = pq_getmsgbyte(s);
3753 : : LogicalRepMsgType saved_command;
3754 : :
3755 : : /*
3756 : : * Set the current command being applied. Since this function can be
3757 : : * called recursively when applying spooled changes, save the current
3758 : : * command.
3759 : : */
1523 3760 : 337348 : saved_command = apply_error_callback_arg.command;
3761 : 337348 : apply_error_callback_arg.command = action;
3762 : :
3204 peter_e@gmx.net 3763 [ + + + + : 337348 : switch (action)
+ + + + +
- + + + +
+ + + + +
- ]
3764 : : {
1821 akapila@postgresql.o 3765 : 494 : case LOGICAL_REP_MSG_BEGIN:
3204 peter_e@gmx.net 3766 : 494 : apply_handle_begin(s);
1523 akapila@postgresql.o 3767 : 494 : break;
3768 : :
1821 3769 : 445 : case LOGICAL_REP_MSG_COMMIT:
3204 peter_e@gmx.net 3770 : 445 : apply_handle_commit(s);
1523 akapila@postgresql.o 3771 : 445 : break;
3772 : :
1821 3773 : 185901 : case LOGICAL_REP_MSG_INSERT:
3204 peter_e@gmx.net 3774 : 185901 : apply_handle_insert(s);
1523 akapila@postgresql.o 3775 : 185858 : break;
3776 : :
1821 3777 : 66177 : case LOGICAL_REP_MSG_UPDATE:
3204 peter_e@gmx.net 3778 : 66177 : apply_handle_update(s);
1523 akapila@postgresql.o 3779 : 66169 : break;
3780 : :
1821 3781 : 81935 : case LOGICAL_REP_MSG_DELETE:
3204 peter_e@gmx.net 3782 : 81935 : apply_handle_delete(s);
1523 akapila@postgresql.o 3783 : 81935 : break;
3784 : :
1821 3785 : 19 : case LOGICAL_REP_MSG_TRUNCATE:
2761 peter_e@gmx.net 3786 : 19 : apply_handle_truncate(s);
1523 akapila@postgresql.o 3787 : 19 : break;
3788 : :
1821 3789 : 473 : case LOGICAL_REP_MSG_RELATION:
3204 peter_e@gmx.net 3790 : 473 : apply_handle_relation(s);
1523 akapila@postgresql.o 3791 : 473 : break;
3792 : :
1821 3793 : 18 : case LOGICAL_REP_MSG_TYPE:
3204 peter_e@gmx.net 3794 : 18 : apply_handle_type(s);
1523 akapila@postgresql.o 3795 : 18 : break;
3796 : :
1821 3797 : 7 : case LOGICAL_REP_MSG_ORIGIN:
3204 peter_e@gmx.net 3798 : 7 : apply_handle_origin(s);
1523 akapila@postgresql.o 3799 : 7 : break;
3800 : :
1666 akapila@postgresql.o 3801 :UBC 0 : case LOGICAL_REP_MSG_MESSAGE:
3802 : :
3803 : : /*
3804 : : * Logical replication does not use generic logical messages yet.
3805 : : * Although, it could be used by other applications that use this
3806 : : * output plugin.
3807 : : */
1523 3808 : 0 : break;
3809 : :
1821 akapila@postgresql.o 3810 :CBC 857 : case LOGICAL_REP_MSG_STREAM_START:
1881 3811 : 857 : apply_handle_stream_start(s);
1523 3812 : 857 : break;
3813 : :
1531 3814 : 856 : case LOGICAL_REP_MSG_STREAM_STOP:
1881 3815 : 856 : apply_handle_stream_stop(s);
1523 3816 : 854 : break;
3817 : :
1821 3818 : 38 : case LOGICAL_REP_MSG_STREAM_ABORT:
1881 3819 : 38 : apply_handle_stream_abort(s);
1523 3820 : 38 : break;
3821 : :
1821 3822 : 61 : case LOGICAL_REP_MSG_STREAM_COMMIT:
1881 3823 : 61 : apply_handle_stream_commit(s);
1523 3824 : 59 : break;
3825 : :
1567 3826 : 16 : case LOGICAL_REP_MSG_BEGIN_PREPARE:
3827 : 16 : apply_handle_begin_prepare(s);
1523 3828 : 16 : break;
3829 : :
1567 3830 : 15 : case LOGICAL_REP_MSG_PREPARE:
3831 : 15 : apply_handle_prepare(s);
1523 3832 : 14 : break;
3833 : :
1567 3834 : 20 : case LOGICAL_REP_MSG_COMMIT_PREPARED:
3835 : 20 : apply_handle_commit_prepared(s);
1523 3836 : 20 : break;
3837 : :
1567 3838 : 5 : case LOGICAL_REP_MSG_ROLLBACK_PREPARED:
3839 : 5 : apply_handle_rollback_prepared(s);
1523 3840 : 5 : break;
3841 : :
1546 3842 : 11 : case LOGICAL_REP_MSG_STREAM_PREPARE:
3843 : 11 : apply_handle_stream_prepare(s);
1523 3844 : 11 : break;
3845 : :
1523 akapila@postgresql.o 3846 :UBC 0 : default:
3847 [ # # ]: 0 : ereport(ERROR,
3848 : : (errcode(ERRCODE_PROTOCOL_VIOLATION),
3849 : : errmsg("invalid logical replication message type \"??? (%d)\"", action)));
3850 : : }
3851 : :
3852 : : /* Reset the current command */
1523 akapila@postgresql.o 3853 :CBC 337292 : apply_error_callback_arg.command = saved_command;
3204 peter_e@gmx.net 3854 : 337292 : }
3855 : :
3856 : : /*
3857 : : * Figure out which write/flush positions to report to the walsender process.
3858 : : *
3859 : : * We can't simply report back the last LSN the walsender sent us because the
3860 : : * local transaction might not yet be flushed to disk locally. Instead we
3861 : : * build a list that associates local with remote LSNs for every commit. When
3862 : : * reporting back the flush position to the sender we iterate that list and
3863 : : * check which entries on it are already locally flushed. Those we can report
3864 : : * as having been flushed.
3865 : : *
3866 : : * The have_pending_txes is true if there are outstanding transactions that
3867 : : * need to be flushed.
3868 : : */
3869 : : static void
3870 : 21469 : get_flush_position(XLogRecPtr *write, XLogRecPtr *flush,
3871 : : bool *have_pending_txes)
3872 : : {
3873 : : dlist_mutable_iter iter;
1453 rhaas@postgresql.org 3874 : 21469 : XLogRecPtr local_flush = GetFlushRecPtr(NULL);
3875 : :
3204 peter_e@gmx.net 3876 : 21469 : *write = InvalidXLogRecPtr;
3877 : 21469 : *flush = InvalidXLogRecPtr;
3878 : :
3879 [ + - + + ]: 21973 : dlist_foreach_modify(iter, &lsn_mapping)
3880 : : {
3881 : 3092 : FlushPosition *pos =
893 tgl@sss.pgh.pa.us 3882 : 3092 : dlist_container(FlushPosition, node, iter.cur);
3883 : :
3204 peter_e@gmx.net 3884 : 3092 : *write = pos->remote_end;
3885 : :
3886 [ + + ]: 3092 : if (pos->local_end <= local_flush)
3887 : : {
3888 : 504 : *flush = pos->remote_end;
3889 : 504 : dlist_delete(iter.cur);
3890 : 504 : pfree(pos);
3891 : : }
3892 : : else
3893 : : {
3894 : : /*
3895 : : * Don't want to uselessly iterate over the rest of the list which
3896 : : * could potentially be long. Instead get the last element and
3897 : : * grab the write position from there.
3898 : : */
3899 : 2588 : pos = dlist_tail_element(FlushPosition, node,
3900 : : &lsn_mapping);
3901 : 2588 : *write = pos->remote_end;
3902 : 2588 : *have_pending_txes = true;
3903 : 2588 : return;
3904 : : }
3905 : : }
3906 : :
3907 : 18881 : *have_pending_txes = !dlist_is_empty(&lsn_mapping);
3908 : : }
3909 : :
3910 : : /*
3911 : : * Store current remote/local lsn pair in the tracking list.
3912 : : */
3913 : : void
1023 akapila@postgresql.o 3914 : 551 : store_flush_position(XLogRecPtr remote_lsn, XLogRecPtr local_lsn)
3915 : : {
3916 : : FlushPosition *flushpos;
3917 : :
3918 : : /*
3919 : : * Skip for parallel apply workers, because the lsn_mapping is maintained
3920 : : * by the leader apply worker.
3921 : : */
3922 [ + + ]: 551 : if (am_parallel_apply_worker())
3923 : 19 : return;
3924 : :
3925 : : /* Need to do this in permanent context */
3094 peter_e@gmx.net 3926 : 532 : MemoryContextSwitchTo(ApplyContext);
3927 : :
3928 : : /* Track commit lsn */
3204 3929 : 532 : flushpos = (FlushPosition *) palloc(sizeof(FlushPosition));
1023 akapila@postgresql.o 3930 : 532 : flushpos->local_end = local_lsn;
3204 peter_e@gmx.net 3931 : 532 : flushpos->remote_end = remote_lsn;
3932 : :
3933 : 532 : dlist_push_tail(&lsn_mapping, &flushpos->node);
3094 3934 : 532 : MemoryContextSwitchTo(ApplyMessageContext);
3935 : : }
3936 : :
3937 : :
3938 : : /* Update statistics of the worker. */
3939 : : static void
3204 3940 : 186913 : UpdateWorkerStats(XLogRecPtr last_lsn, TimestampTz send_time, bool reply)
3941 : : {
3942 : 186913 : MyLogicalRepWorker->last_lsn = last_lsn;
3943 : 186913 : MyLogicalRepWorker->last_send_time = send_time;
3944 : 186913 : MyLogicalRepWorker->last_recv_time = GetCurrentTimestamp();
3945 [ + + ]: 186913 : if (reply)
3946 : : {
3947 : 1816 : MyLogicalRepWorker->reply_lsn = last_lsn;
3948 : 1816 : MyLogicalRepWorker->reply_time = send_time;
3949 : : }
3950 : 186913 : }
3951 : :
3952 : : /*
3953 : : * Apply main loop.
3954 : : */
3955 : : static void
3141 3956 : 404 : LogicalRepApplyLoop(XLogRecPtr last_received)
3957 : : {
2202 michael@paquier.xyz 3958 : 404 : TimestampTz last_recv_timestamp = GetCurrentTimestamp();
1880 tgl@sss.pgh.pa.us 3959 : 404 : bool ping_sent = false;
3960 : : TimeLineID tli;
3961 : : ErrorContextCallback errcallback;
97 akapila@postgresql.o 3962 :GNC 404 : RetainDeadTuplesData rdt_data = {0};
3963 : :
3964 : : /*
3965 : : * Init the ApplyMessageContext which we clean up after each replication
3966 : : * protocol message.
3967 : : */
3094 peter_e@gmx.net 3968 :CBC 404 : ApplyMessageContext = AllocSetContextCreate(ApplyContext,
3969 : : "ApplyMessageContext",
3970 : : ALLOCSET_DEFAULT_SIZES);
3971 : :
3972 : : /*
3973 : : * This memory context is used for per-stream data when the streaming mode
3974 : : * is enabled. This context is reset on each stream stop.
3975 : : */
1881 akapila@postgresql.o 3976 : 404 : LogicalStreamingContext = AllocSetContextCreate(ApplyContext,
3977 : : "LogicalStreamingContext",
3978 : : ALLOCSET_DEFAULT_SIZES);
3979 : :
3980 : : /* mark as idle, before starting to loop */
3204 peter_e@gmx.net 3981 : 404 : pgstat_report_activity(STATE_IDLE, NULL);
3982 : :
3983 : : /*
3984 : : * Push apply error context callback. Fields will be filled while applying
3985 : : * a change.
3986 : : */
1523 akapila@postgresql.o 3987 : 404 : errcallback.callback = apply_error_callback;
3988 : 404 : errcallback.previous = error_context_stack;
3989 : 404 : error_context_stack = &errcallback;
1023 3990 : 404 : apply_error_context_stack = error_context_stack;
3991 : :
3992 : : /* This outer loop iterates once per wait. */
3993 : : for (;;)
3204 peter_e@gmx.net 3994 : 19089 : {
3995 : 19493 : pgsocket fd = PGINVALID_SOCKET;
3996 : : int rc;
3997 : : int len;
3998 : 19493 : char *buf = NULL;
3999 : 19493 : bool endofstream = false;
4000 : : long wait_time;
4001 : :
3070 4002 [ + + ]: 19493 : CHECK_FOR_INTERRUPTS();
4003 : :
3094 4004 : 19492 : MemoryContextSwitchTo(ApplyMessageContext);
4005 : :
1630 alvherre@alvh.no-ip. 4006 : 19492 : len = walrcv_receive(LogRepWorkerWalRcvConn, &buf, &fd);
4007 : :
3204 peter_e@gmx.net 4008 [ + + ]: 19473 : if (len != 0)
4009 : : {
4010 : : /* Loop to process all available data (without blocking). */
4011 : : for (;;)
4012 : : {
4013 [ - + ]: 205396 : CHECK_FOR_INTERRUPTS();
4014 : :
4015 [ + + ]: 205396 : if (len == 0)
4016 : : {
4017 : 18474 : break;
4018 : : }
4019 [ + + ]: 186922 : else if (len < 0)
4020 : : {
4021 [ + - ]: 9 : ereport(LOG,
4022 : : (errmsg("data stream from publisher has ended")));
4023 : 9 : endofstream = true;
4024 : 9 : break;
4025 : : }
4026 : : else
4027 : : {
4028 : : int c;
4029 : : StringInfoData s;
4030 : :
874 akapila@postgresql.o 4031 [ - + ]: 186913 : if (ConfigReloadPending)
4032 : : {
874 akapila@postgresql.o 4033 :UBC 0 : ConfigReloadPending = false;
4034 : 0 : ProcessConfigFile(PGC_SIGHUP);
4035 : : }
4036 : :
4037 : : /* Reset timeout. */
3204 peter_e@gmx.net 4038 :CBC 186913 : last_recv_timestamp = GetCurrentTimestamp();
4039 : 186913 : ping_sent = false;
4040 : :
97 akapila@postgresql.o 4041 :GNC 186913 : rdt_data.last_recv_time = last_recv_timestamp;
4042 : :
4043 : : /* Ensure we are reading the data into our memory context. */
3094 peter_e@gmx.net 4044 :CBC 186913 : MemoryContextSwitchTo(ApplyMessageContext);
4045 : :
733 drowley@postgresql.o 4046 : 186913 : initReadOnlyStringInfo(&s, buf, len);
4047 : :
3204 peter_e@gmx.net 4048 : 186913 : c = pq_getmsgbyte(&s);
4049 : :
83 nathan@postgresql.or 4050 [ + + ]:GNC 186913 : if (c == PqReplMsg_WALData)
4051 : : {
4052 : : XLogRecPtr start_lsn;
4053 : : XLogRecPtr end_lsn;
4054 : : TimestampTz send_time;
4055 : :
3204 peter_e@gmx.net 4056 :CBC 184927 : start_lsn = pq_getmsgint64(&s);
4057 : 184927 : end_lsn = pq_getmsgint64(&s);
3169 tgl@sss.pgh.pa.us 4058 : 184927 : send_time = pq_getmsgint64(&s);
4059 : :
3204 peter_e@gmx.net 4060 [ + + ]: 184927 : if (last_received < start_lsn)
4061 : 149424 : last_received = start_lsn;
4062 : :
4063 [ - + ]: 184927 : if (last_received < end_lsn)
3204 peter_e@gmx.net 4064 :UBC 0 : last_received = end_lsn;
4065 : :
3204 peter_e@gmx.net 4066 :CBC 184927 : UpdateWorkerStats(last_received, send_time, false);
4067 : :
4068 : 184927 : apply_dispatch(&s);
4069 : :
97 akapila@postgresql.o 4070 :GNC 184874 : maybe_advance_nonremovable_xid(&rdt_data, false);
4071 : : }
83 nathan@postgresql.or 4072 [ + + ]: 1986 : else if (c == PqReplMsg_Keepalive)
4073 : : {
4074 : : XLogRecPtr end_lsn;
4075 : : TimestampTz timestamp;
4076 : : bool reply_requested;
4077 : :
3141 peter_e@gmx.net 4078 :CBC 1816 : end_lsn = pq_getmsgint64(&s);
3169 tgl@sss.pgh.pa.us 4079 : 1816 : timestamp = pq_getmsgint64(&s);
3204 peter_e@gmx.net 4080 : 1816 : reply_requested = pq_getmsgbyte(&s);
4081 : :
3141 4082 [ + + ]: 1816 : if (last_received < end_lsn)
4083 : 981 : last_received = end_lsn;
4084 : :
4085 : 1816 : send_feedback(last_received, reply_requested, false);
4086 : :
97 akapila@postgresql.o 4087 :GNC 1816 : maybe_advance_nonremovable_xid(&rdt_data, false);
4088 : :
3204 peter_e@gmx.net 4089 :CBC 1816 : UpdateWorkerStats(last_received, timestamp, true);
4090 : : }
83 nathan@postgresql.or 4091 [ + - ]:GNC 170 : else if (c == PqReplMsg_PrimaryStatusUpdate)
4092 : : {
97 akapila@postgresql.o 4093 : 170 : rdt_data.remote_lsn = pq_getmsgint64(&s);
4094 : 170 : rdt_data.remote_oldestxid = FullTransactionIdFromU64((uint64) pq_getmsgint64(&s));
4095 : 170 : rdt_data.remote_nextxid = FullTransactionIdFromU64((uint64) pq_getmsgint64(&s));
4096 : 170 : rdt_data.reply_time = pq_getmsgint64(&s);
4097 : :
4098 : : /*
4099 : : * This should never happen, see
4100 : : * ProcessStandbyPSRequestMessage. But if it happens
4101 : : * due to a bug, we don't want to proceed as it can
4102 : : * incorrectly advance oldest_nonremovable_xid.
4103 : : */
4104 [ - + ]: 170 : if (XLogRecPtrIsInvalid(rdt_data.remote_lsn))
97 akapila@postgresql.o 4105 [ # # ]:UNC 0 : elog(ERROR, "cannot get the latest WAL position from the publisher");
4106 : :
97 akapila@postgresql.o 4107 :GNC 170 : maybe_advance_nonremovable_xid(&rdt_data, true);
4108 : :
4109 : 170 : UpdateWorkerStats(last_received, rdt_data.reply_time, false);
4110 : : }
4111 : : /* other message types are purposefully ignored */
4112 : :
3094 peter_e@gmx.net 4113 :CBC 186860 : MemoryContextReset(ApplyMessageContext);
4114 : : }
4115 : :
1630 alvherre@alvh.no-ip. 4116 : 186860 : len = walrcv_receive(LogRepWorkerWalRcvConn, &buf, &fd);
4117 : : }
4118 : : }
4119 : :
4120 : : /* confirm all writes so far */
3041 tgl@sss.pgh.pa.us 4121 : 19420 : send_feedback(last_received, false, false);
4122 : :
4123 : : /* Reset the timestamp if no message was received */
97 akapila@postgresql.o 4124 :GNC 19420 : rdt_data.last_recv_time = 0;
4125 : :
4126 : 19420 : maybe_advance_nonremovable_xid(&rdt_data, false);
4127 : :
1881 akapila@postgresql.o 4128 [ + + + + ]:CBC 19419 : if (!in_remote_transaction && !in_streamed_transaction)
4129 : : {
4130 : : /*
4131 : : * If we didn't get any transactions for a while there might be
4132 : : * unconsumed invalidation messages in the queue, consume them
4133 : : * now.
4134 : : */
3141 peter_e@gmx.net 4135 : 2736 : AcceptInvalidationMessages();
3069 4136 : 2736 : maybe_reread_subscription();
4137 : :
4138 : : /* Process any table synchronization changes. */
12 akapila@postgresql.o 4139 :GNC 2697 : ProcessSyncingRelations(last_received);
4140 : : }
4141 : :
4142 : : /* Cleanup the memory. */
713 nathan@postgresql.or 4143 :CBC 19188 : MemoryContextReset(ApplyMessageContext);
3204 peter_e@gmx.net 4144 : 19188 : MemoryContextSwitchTo(TopMemoryContext);
4145 : :
4146 : : /* Check if we need to exit the streaming loop. */
4147 [ + + ]: 19188 : if (endofstream)
4148 : 9 : break;
4149 : :
4150 : : /*
4151 : : * Wait for more data or latch. If we have unflushed transactions,
4152 : : * wake up after WalWriterDelay to see if they've been flushed yet (in
4153 : : * which case we should send a feedback message). Otherwise, there's
4154 : : * no particular urgency about waking up unless we get data or a
4155 : : * signal.
4156 : : */
3041 tgl@sss.pgh.pa.us 4157 [ + + ]: 19179 : if (!dlist_is_empty(&lsn_mapping))
4158 : 2098 : wait_time = WalWriterDelay;
4159 : : else
4160 : 17081 : wait_time = NAPTIME_PER_CYCLE;
4161 : :
4162 : : /*
4163 : : * Ensure to wake up when it's possible to advance the non-removable
4164 : : * transaction ID, or when the retention duration may have exceeded
4165 : : * max_retention_duration.
4166 : : */
56 akapila@postgresql.o 4167 [ + + ]:GNC 19179 : if (MySubscription->retentionactive)
4168 : : {
4169 [ + + ]: 178 : if (rdt_data.phase == RDT_GET_CANDIDATE_XID &&
4170 [ - + ]: 49 : rdt_data.xid_advance_interval)
56 akapila@postgresql.o 4171 :UNC 0 : wait_time = Min(wait_time, rdt_data.xid_advance_interval);
56 akapila@postgresql.o 4172 [ + + ]:GNC 178 : else if (MySubscription->maxretention > 0)
4173 : 1 : wait_time = Min(wait_time, MySubscription->maxretention);
4174 : : }
4175 : :
3066 andres@anarazel.de 4176 :CBC 19179 : rc = WaitLatchOrSocket(MyLatch,
4177 : : WL_SOCKET_READABLE | WL_LATCH_SET |
4178 : : WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
4179 : : fd, wait_time,
4180 : : WAIT_EVENT_LOGICAL_APPLY_MAIN);
4181 : :
4182 [ + + ]: 19179 : if (rc & WL_LATCH_SET)
4183 : : {
4184 : 696 : ResetLatch(MyLatch);
4185 [ + + ]: 696 : CHECK_FOR_INTERRUPTS();
4186 : : }
4187 : :
2142 rhaas@postgresql.org 4188 [ + + ]: 19089 : if (ConfigReloadPending)
4189 : : {
4190 : 10 : ConfigReloadPending = false;
3123 peter_e@gmx.net 4191 : 10 : ProcessConfigFile(PGC_SIGHUP);
4192 : : }
4193 : :
3204 4194 [ + + ]: 19089 : if (rc & WL_TIMEOUT)
4195 : : {
4196 : : /*
4197 : : * We didn't receive anything new. If we haven't heard anything
4198 : : * from the server for more than wal_receiver_timeout / 2, ping
4199 : : * the server. Also, if it's been longer than
4200 : : * wal_receiver_status_interval since the last update we sent,
4201 : : * send a status update to the primary anyway, to report any
4202 : : * progress in applying WAL.
4203 : : */
4204 : 222 : bool requestReply = false;
4205 : :
4206 : : /*
4207 : : * Check if time since last receive from primary has reached the
4208 : : * configured limit.
4209 : : */
4210 [ + - ]: 222 : if (wal_receiver_timeout > 0)
4211 : : {
4212 : 222 : TimestampTz now = GetCurrentTimestamp();
4213 : : TimestampTz timeout;
4214 : :
4215 : 222 : timeout =
4216 : 222 : TimestampTzPlusMilliseconds(last_recv_timestamp,
4217 : : wal_receiver_timeout);
4218 : :
4219 [ - + ]: 222 : if (now >= timeout)
3204 peter_e@gmx.net 4220 [ # # ]:UBC 0 : ereport(ERROR,
4221 : : (errcode(ERRCODE_CONNECTION_FAILURE),
4222 : : errmsg("terminating logical replication worker due to timeout")));
4223 : :
4224 : : /* Check to see if it's time for a ping. */
3204 peter_e@gmx.net 4225 [ + - ]:CBC 222 : if (!ping_sent)
4226 : : {
4227 : 222 : timeout = TimestampTzPlusMilliseconds(last_recv_timestamp,
4228 : : (wal_receiver_timeout / 2));
4229 [ - + ]: 222 : if (now >= timeout)
4230 : : {
3204 peter_e@gmx.net 4231 :UBC 0 : requestReply = true;
4232 : 0 : ping_sent = true;
4233 : : }
4234 : : }
4235 : : }
4236 : :
3204 peter_e@gmx.net 4237 :CBC 222 : send_feedback(last_received, requestReply, requestReply);
4238 : :
97 akapila@postgresql.o 4239 :GNC 222 : maybe_advance_nonremovable_xid(&rdt_data, false);
4240 : :
4241 : : /*
4242 : : * Force reporting to ensure long idle periods don't lead to
4243 : : * arbitrarily delayed stats. Stats can only be reported outside
4244 : : * of (implicit or explicit) transactions. That shouldn't lead to
4245 : : * stats being delayed for long, because transactions are either
4246 : : * sent as a whole on commit or streamed. Streamed transactions
4247 : : * are spilled to disk and applied on commit.
4248 : : */
1265 andres@anarazel.de 4249 [ + - ]:CBC 222 : if (!IsTransactionState())
4250 : 222 : pgstat_report_stat(true);
4251 : : }
4252 : : }
4253 : :
4254 : : /* Pop the error context stack */
1523 akapila@postgresql.o 4255 : 9 : error_context_stack = errcallback.previous;
1023 4256 : 9 : apply_error_context_stack = error_context_stack;
4257 : :
4258 : : /* All done */
1630 alvherre@alvh.no-ip. 4259 : 9 : walrcv_endstreaming(LogRepWorkerWalRcvConn, &tli);
3204 peter_e@gmx.net 4260 :UBC 0 : }
4261 : :
4262 : : /*
4263 : : * Send a Standby Status Update message to server.
4264 : : *
4265 : : * 'recvpos' is the latest LSN we've received data to, force is set if we need
4266 : : * to send a response to avoid timeouts.
4267 : : */
4268 : : static void
3204 peter_e@gmx.net 4269 :CBC 21458 : send_feedback(XLogRecPtr recvpos, bool force, bool requestReply)
4270 : : {
4271 : : static StringInfo reply_message = NULL;
4272 : : static TimestampTz send_time = 0;
4273 : :
4274 : : static XLogRecPtr last_recvpos = InvalidXLogRecPtr;
4275 : : static XLogRecPtr last_writepos = InvalidXLogRecPtr;
4276 : :
4277 : : XLogRecPtr writepos;
4278 : : XLogRecPtr flushpos;
4279 : : TimestampTz now;
4280 : : bool have_pending_txes;
4281 : :
4282 : : /*
4283 : : * If the user doesn't want status to be reported to the publisher, be
4284 : : * sure to exit before doing anything at all.
4285 : : */
4286 [ + + - + ]: 21458 : if (!force && wal_receiver_status_interval <= 0)
4287 : 6623 : return;
4288 : :
4289 : : /* It's legal to not pass a recvpos */
4290 [ - + ]: 21458 : if (recvpos < last_recvpos)
3204 peter_e@gmx.net 4291 :UBC 0 : recvpos = last_recvpos;
4292 : :
3204 peter_e@gmx.net 4293 :CBC 21458 : get_flush_position(&writepos, &flushpos, &have_pending_txes);
4294 : :
4295 : : /*
4296 : : * No outstanding transactions to flush, we can report the latest received
4297 : : * position. This is important for synchronous replication.
4298 : : */
4299 [ + + ]: 21458 : if (!have_pending_txes)
4300 : 18874 : flushpos = writepos = recvpos;
4301 : :
4302 [ - + ]: 21458 : if (writepos < last_writepos)
3204 peter_e@gmx.net 4303 :UBC 0 : writepos = last_writepos;
4304 : :
3204 peter_e@gmx.net 4305 [ + + ]:CBC 21458 : if (flushpos < last_flushpos)
4306 : 2542 : flushpos = last_flushpos;
4307 : :
4308 : 21458 : now = GetCurrentTimestamp();
4309 : :
4310 : : /* if we've already reported everything we're good */
4311 [ + + ]: 21458 : if (!force &&
4312 [ + + ]: 21456 : writepos == last_writepos &&
4313 [ + + ]: 6883 : flushpos == last_flushpos &&
4314 [ + + ]: 6705 : !TimestampDifferenceExceeds(send_time, now,
4315 : : wal_receiver_status_interval * 1000))
4316 : 6623 : return;
4317 : 14835 : send_time = now;
4318 : :
4319 [ + + ]: 14835 : if (!reply_message)
4320 : : {
3086 bruce@momjian.us 4321 : 404 : MemoryContext oldctx = MemoryContextSwitchTo(ApplyContext);
4322 : :
3204 peter_e@gmx.net 4323 : 404 : reply_message = makeStringInfo();
4324 : 404 : MemoryContextSwitchTo(oldctx);
4325 : : }
4326 : : else
4327 : 14431 : resetStringInfo(reply_message);
4328 : :
83 nathan@postgresql.or 4329 :GNC 14835 : pq_sendbyte(reply_message, PqReplMsg_StandbyStatusUpdate);
3051 tgl@sss.pgh.pa.us 4330 :CBC 14835 : pq_sendint64(reply_message, recvpos); /* write */
4331 : 14835 : pq_sendint64(reply_message, flushpos); /* flush */
4332 : 14835 : pq_sendint64(reply_message, writepos); /* apply */
3086 bruce@momjian.us 4333 : 14835 : pq_sendint64(reply_message, now); /* sendTime */
3204 peter_e@gmx.net 4334 : 14835 : pq_sendbyte(reply_message, requestReply); /* replyRequested */
4335 : :
113 alvherre@kurilemu.de 4336 [ + + ]:GNC 14835 : elog(DEBUG2, "sending feedback (force %d) to recv %X/%08X, write %X/%08X, flush %X/%08X",
4337 : : force,
4338 : : LSN_FORMAT_ARGS(recvpos),
4339 : : LSN_FORMAT_ARGS(writepos),
4340 : : LSN_FORMAT_ARGS(flushpos));
4341 : :
1630 alvherre@alvh.no-ip. 4342 :CBC 14835 : walrcv_send(LogRepWorkerWalRcvConn,
4343 : : reply_message->data, reply_message->len);
4344 : :
3204 peter_e@gmx.net 4345 [ + + ]: 14835 : if (recvpos > last_recvpos)
4346 : 14574 : last_recvpos = recvpos;
4347 [ + + ]: 14835 : if (writepos > last_writepos)
4348 : 14575 : last_writepos = writepos;
4349 [ + + ]: 14835 : if (flushpos > last_flushpos)
4350 : 14378 : last_flushpos = flushpos;
4351 : : }
4352 : :
4353 : : /*
4354 : : * Attempt to advance the non-removable transaction ID.
4355 : : *
4356 : : * See comments atop worker.c for details.
4357 : : */
4358 : : static void
97 akapila@postgresql.o 4359 :GNC 206502 : maybe_advance_nonremovable_xid(RetainDeadTuplesData *rdt_data,
4360 : : bool status_received)
4361 : : {
4362 [ + + ]: 206502 : if (!can_advance_nonremovable_xid(rdt_data))
4363 : 205995 : return;
4364 : :
4365 : 507 : process_rdt_phase_transition(rdt_data, status_received);
4366 : : }
4367 : :
4368 : : /*
4369 : : * Preliminary check to determine if advancing the non-removable transaction ID
4370 : : * is allowed.
4371 : : */
4372 : : static bool
4373 : 206502 : can_advance_nonremovable_xid(RetainDeadTuplesData *rdt_data)
4374 : : {
4375 : : /*
4376 : : * It is sufficient to manage non-removable transaction ID for a
4377 : : * subscription by the main apply worker to detect update_deleted reliably
4378 : : * even for table sync or parallel apply workers.
4379 : : */
4380 [ + + ]: 206502 : if (!am_leader_apply_worker())
4381 : 382 : return false;
4382 : :
4383 : : /* No need to advance if retaining dead tuples is not required */
4384 [ + + ]: 206120 : if (!MySubscription->retaindeadtuples)
4385 : 205613 : return false;
4386 : :
4387 : 507 : return true;
4388 : : }
4389 : :
4390 : : /*
4391 : : * Process phase transitions during the non-removable transaction ID
4392 : : * advancement. See comments atop worker.c for details of the transition.
4393 : : */
4394 : : static void
4395 : 749 : process_rdt_phase_transition(RetainDeadTuplesData *rdt_data,
4396 : : bool status_received)
4397 : : {
4398 [ + + + + : 749 : switch (rdt_data->phase)
+ + - ]
4399 : : {
4400 : 144 : case RDT_GET_CANDIDATE_XID:
4401 : 144 : get_candidate_xid(rdt_data);
4402 : 144 : break;
4403 : 176 : case RDT_REQUEST_PUBLISHER_STATUS:
4404 : 176 : request_publisher_status(rdt_data);
4405 : 176 : break;
4406 : 327 : case RDT_WAIT_FOR_PUBLISHER_STATUS:
4407 : 327 : wait_for_publisher_status(rdt_data, status_received);
4408 : 327 : break;
4409 : 100 : case RDT_WAIT_FOR_LOCAL_FLUSH:
4410 : 100 : wait_for_local_flush(rdt_data);
4411 : 100 : break;
56 4412 : 1 : case RDT_STOP_CONFLICT_INFO_RETENTION:
4413 : 1 : stop_conflict_info_retention(rdt_data);
4414 : 1 : break;
43 4415 : 1 : case RDT_RESUME_CONFLICT_INFO_RETENTION:
4416 : 1 : resume_conflict_info_retention(rdt_data);
43 akapila@postgresql.o 4417 :UNC 0 : break;
4418 : : }
97 akapila@postgresql.o 4419 :GNC 748 : }
4420 : :
4421 : : /*
4422 : : * Workhorse for the RDT_GET_CANDIDATE_XID phase.
4423 : : */
4424 : : static void
4425 : 144 : get_candidate_xid(RetainDeadTuplesData *rdt_data)
4426 : : {
4427 : : TransactionId oldest_running_xid;
4428 : : TimestampTz now;
4429 : :
4430 : : /*
4431 : : * Use last_recv_time when applying changes in the loop to avoid
4432 : : * unnecessary system time retrieval. If last_recv_time is not available,
4433 : : * obtain the current timestamp.
4434 : : */
4435 [ + + ]: 144 : now = rdt_data->last_recv_time ? rdt_data->last_recv_time : GetCurrentTimestamp();
4436 : :
4437 : : /*
4438 : : * Compute the candidate_xid and request the publisher status at most once
4439 : : * per xid_advance_interval. Refer to adjust_xid_advance_interval() for
4440 : : * details on how this value is dynamically adjusted. This is to avoid
4441 : : * using CPU and network resources without making much progress.
4442 : : */
4443 [ - + ]: 144 : if (!TimestampDifferenceExceeds(rdt_data->candidate_xid_time, now,
4444 : : rdt_data->xid_advance_interval))
97 akapila@postgresql.o 4445 :UNC 0 : return;
4446 : :
4447 : : /*
4448 : : * Immediately update the timer, even if the function returns later
4449 : : * without setting candidate_xid due to inactivity on the subscriber. This
4450 : : * avoids frequent calls to GetOldestActiveTransactionId.
4451 : : */
97 akapila@postgresql.o 4452 :GNC 144 : rdt_data->candidate_xid_time = now;
4453 : :
4454 : : /*
4455 : : * Consider transactions in the current database, as only dead tuples from
4456 : : * this database are required for conflict detection.
4457 : : */
4458 : 144 : oldest_running_xid = GetOldestActiveTransactionId(false, false);
4459 : :
4460 : : /*
4461 : : * Oldest active transaction ID (oldest_running_xid) can't be behind any
4462 : : * of its previously computed value.
4463 : : */
4464 [ - + ]: 144 : Assert(TransactionIdPrecedesOrEquals(MyLogicalRepWorker->oldest_nonremovable_xid,
4465 : : oldest_running_xid));
4466 : :
4467 : : /* Return if the oldest_nonremovable_xid cannot be advanced */
4468 [ + + ]: 144 : if (TransactionIdEquals(MyLogicalRepWorker->oldest_nonremovable_xid,
4469 : : oldest_running_xid))
4470 : : {
4471 : 104 : adjust_xid_advance_interval(rdt_data, false);
4472 : 104 : return;
4473 : : }
4474 : :
4475 : 40 : adjust_xid_advance_interval(rdt_data, true);
4476 : :
4477 : 40 : rdt_data->candidate_xid = oldest_running_xid;
4478 : 40 : rdt_data->phase = RDT_REQUEST_PUBLISHER_STATUS;
4479 : :
4480 : : /* process the next phase */
4481 : 40 : process_rdt_phase_transition(rdt_data, false);
4482 : : }
4483 : :
4484 : : /*
4485 : : * Workhorse for the RDT_REQUEST_PUBLISHER_STATUS phase.
4486 : : */
4487 : : static void
4488 : 176 : request_publisher_status(RetainDeadTuplesData *rdt_data)
4489 : : {
4490 : : static StringInfo request_message = NULL;
4491 : :
4492 [ + + ]: 176 : if (!request_message)
4493 : : {
4494 : 9 : MemoryContext oldctx = MemoryContextSwitchTo(ApplyContext);
4495 : :
4496 : 9 : request_message = makeStringInfo();
4497 : 9 : MemoryContextSwitchTo(oldctx);
4498 : : }
4499 : : else
4500 : 167 : resetStringInfo(request_message);
4501 : :
4502 : : /*
4503 : : * Send the current time to update the remote walsender's latest reply
4504 : : * message received time.
4505 : : */
83 nathan@postgresql.or 4506 : 176 : pq_sendbyte(request_message, PqReplMsg_PrimaryStatusRequest);
97 akapila@postgresql.o 4507 : 176 : pq_sendint64(request_message, GetCurrentTimestamp());
4508 : :
4509 [ + + ]: 176 : elog(DEBUG2, "sending publisher status request message");
4510 : :
4511 : : /* Send a request for the publisher status */
4512 : 176 : walrcv_send(LogRepWorkerWalRcvConn,
4513 : : request_message->data, request_message->len);
4514 : :
4515 : 176 : rdt_data->phase = RDT_WAIT_FOR_PUBLISHER_STATUS;
4516 : :
4517 : : /*
4518 : : * Skip calling maybe_advance_nonremovable_xid() since further transition
4519 : : * is possible only once we receive the publisher status message.
4520 : : */
4521 : 176 : }
4522 : :
4523 : : /*
4524 : : * Workhorse for the RDT_WAIT_FOR_PUBLISHER_STATUS phase.
4525 : : */
4526 : : static void
4527 : 327 : wait_for_publisher_status(RetainDeadTuplesData *rdt_data,
4528 : : bool status_received)
4529 : : {
4530 : : /*
4531 : : * Return if we have requested but not yet received the publisher status.
4532 : : */
4533 [ + + ]: 327 : if (!status_received)
4534 : 157 : return;
4535 : :
4536 : : /*
4537 : : * We don't need to maintain oldest_nonremovable_xid if we decide to stop
4538 : : * retaining conflict information for this worker.
4539 : : */
56 4540 [ - + ]: 170 : if (should_stop_conflict_info_retention(rdt_data))
4541 : : {
43 akapila@postgresql.o 4542 :UNC 0 : rdt_data->phase = RDT_STOP_CONFLICT_INFO_RETENTION;
56 4543 : 0 : return;
4544 : : }
4545 : :
97 akapila@postgresql.o 4546 [ + + ]:GNC 170 : if (!FullTransactionIdIsValid(rdt_data->remote_wait_for))
4547 : 34 : rdt_data->remote_wait_for = rdt_data->remote_nextxid;
4548 : :
4549 : : /*
4550 : : * Check if all remote concurrent transactions that were active at the
4551 : : * first status request have now completed. If completed, proceed to the
4552 : : * next phase; otherwise, continue checking the publisher status until
4553 : : * these transactions finish.
4554 : : *
4555 : : * It's possible that transactions in the commit phase during the last
4556 : : * cycle have now finished committing, but remote_oldestxid remains older
4557 : : * than remote_wait_for. This can happen if some old transaction came in
4558 : : * the commit phase when we requested status in this cycle. We do not
4559 : : * handle this case explicitly as it's rare and the benefit doesn't
4560 : : * justify the required complexity. Tracking would require either caching
4561 : : * all xids at the publisher or sending them to subscribers. The condition
4562 : : * will resolve naturally once the remaining transactions are finished.
4563 : : *
4564 : : * Directly advancing the non-removable transaction ID is possible if
4565 : : * there are no activities on the publisher since the last advancement
4566 : : * cycle. However, it requires maintaining two fields, last_remote_nextxid
4567 : : * and last_remote_lsn, within the structure for comparison with the
4568 : : * current cycle's values. Considering the minimal cost of continuing in
4569 : : * RDT_WAIT_FOR_LOCAL_FLUSH without awaiting changes, we opted not to
4570 : : * advance the transaction ID here.
4571 : : */
4572 [ + + ]: 170 : if (FullTransactionIdPrecedesOrEquals(rdt_data->remote_wait_for,
4573 : : rdt_data->remote_oldestxid))
4574 : 34 : rdt_data->phase = RDT_WAIT_FOR_LOCAL_FLUSH;
4575 : : else
4576 : 136 : rdt_data->phase = RDT_REQUEST_PUBLISHER_STATUS;
4577 : :
4578 : : /* process the next phase */
4579 : 170 : process_rdt_phase_transition(rdt_data, false);
4580 : : }
4581 : :
4582 : : /*
4583 : : * Workhorse for the RDT_WAIT_FOR_LOCAL_FLUSH phase.
4584 : : */
4585 : : static void
4586 : 100 : wait_for_local_flush(RetainDeadTuplesData *rdt_data)
4587 : : {
4588 [ + - - + ]: 100 : Assert(!XLogRecPtrIsInvalid(rdt_data->remote_lsn) &&
4589 : : TransactionIdIsValid(rdt_data->candidate_xid));
4590 : :
4591 : : /*
4592 : : * We expect the publisher and subscriber clocks to be in sync using time
4593 : : * sync service like NTP. Otherwise, we will advance this worker's
4594 : : * oldest_nonremovable_xid prematurely, leading to the removal of rows
4595 : : * required to detect update_deleted reliably. This check primarily
4596 : : * addresses scenarios where the publisher's clock falls behind; if the
4597 : : * publisher's clock is ahead, subsequent transactions will naturally bear
4598 : : * later commit timestamps, conforming to the design outlined atop
4599 : : * worker.c.
4600 : : *
4601 : : * XXX Consider waiting for the publisher's clock to catch up with the
4602 : : * subscriber's before proceeding to the next phase.
4603 : : */
4604 [ - + ]: 100 : if (TimestampDifferenceExceeds(rdt_data->reply_time,
4605 : : rdt_data->candidate_xid_time, 0))
97 akapila@postgresql.o 4606 [ # # ]:UNC 0 : ereport(ERROR,
4607 : : errmsg_internal("oldest_nonremovable_xid transaction ID could be advanced prematurely"),
4608 : : errdetail_internal("The clock on the publisher is behind that of the subscriber."));
4609 : :
4610 : : /*
4611 : : * Do not attempt to advance the non-removable transaction ID when table
4612 : : * sync is in progress. During this time, changes from a single
4613 : : * transaction may be applied by multiple table sync workers corresponding
4614 : : * to the target tables. So, it's necessary for all table sync workers to
4615 : : * apply and flush the corresponding changes before advancing the
4616 : : * transaction ID, otherwise, dead tuples that are still needed for
4617 : : * conflict detection in table sync workers could be removed prematurely.
4618 : : * However, confirming the apply and flush progress across all table sync
4619 : : * workers is complex and not worth the effort, so we simply return if not
4620 : : * all tables are in the READY state.
4621 : : *
4622 : : * Advancing the transaction ID is necessary even when no tables are
4623 : : * currently subscribed, to avoid retaining dead tuples unnecessarily.
4624 : : * While it might seem safe to skip all phases and directly assign
4625 : : * candidate_xid to oldest_nonremovable_xid during the
4626 : : * RDT_GET_CANDIDATE_XID phase in such cases, this is unsafe. If users
4627 : : * concurrently add tables to the subscription, the apply worker may not
4628 : : * process invalidations in time. Consequently,
4629 : : * HasSubscriptionTablesCached() might miss the new tables, leading to
4630 : : * premature advancement of oldest_nonremovable_xid.
4631 : : *
4632 : : * Performing the check during RDT_WAIT_FOR_LOCAL_FLUSH is safe, as
4633 : : * invalidations are guaranteed to be processed before applying changes
4634 : : * from newly added tables while waiting for the local flush to reach
4635 : : * remote_lsn.
4636 : : *
4637 : : * Additionally, even if we check for subscription tables during
4638 : : * RDT_GET_CANDIDATE_XID, they might be dropped before reaching
4639 : : * RDT_WAIT_FOR_LOCAL_FLUSH. Therefore, it's still necessary to verify
4640 : : * subscription tables at this stage to prevent unnecessary tuple
4641 : : * retention.
4642 : : */
12 akapila@postgresql.o 4643 [ + + + + ]:GNC 100 : if (HasSubscriptionTablesCached() && !AllTablesyncsReady())
4644 : : {
4645 : : TimestampTz now;
4646 : :
56 4647 : 32 : now = rdt_data->last_recv_time
4648 [ + + ]: 16 : ? rdt_data->last_recv_time : GetCurrentTimestamp();
4649 : :
4650 : : /*
4651 : : * Record the time spent waiting for table sync, it is needed for the
4652 : : * timeout check in should_stop_conflict_info_retention().
4653 : : */
4654 : 16 : rdt_data->table_sync_wait_time =
4655 : 16 : TimestampDifferenceMilliseconds(rdt_data->candidate_xid_time, now);
4656 : :
4657 : 16 : return;
4658 : : }
4659 : :
4660 : : /*
4661 : : * We don't need to maintain oldest_nonremovable_xid if we decide to stop
4662 : : * retaining conflict information for this worker.
4663 : : */
4664 [ + + ]: 84 : if (should_stop_conflict_info_retention(rdt_data))
4665 : : {
43 4666 : 1 : rdt_data->phase = RDT_STOP_CONFLICT_INFO_RETENTION;
97 4667 : 1 : return;
4668 : : }
4669 : :
4670 : : /*
4671 : : * Update and check the remote flush position if we are applying changes
4672 : : * in a loop. This is done at most once per WalWriterDelay to avoid
4673 : : * performing costly operations in get_flush_position() too frequently
4674 : : * during change application.
4675 : : */
4676 [ + + + + : 113 : if (last_flushpos < rdt_data->remote_lsn && rdt_data->last_recv_time &&
+ + ]
4677 : 30 : TimestampDifferenceExceeds(rdt_data->flushpos_update_time,
4678 : : rdt_data->last_recv_time, WalWriterDelay))
4679 : : {
4680 : : XLogRecPtr writepos;
4681 : : XLogRecPtr flushpos;
4682 : : bool have_pending_txes;
4683 : :
4684 : : /* Fetch the latest remote flush position */
4685 : 11 : get_flush_position(&writepos, &flushpos, &have_pending_txes);
4686 : :
4687 [ - + ]: 11 : if (flushpos > last_flushpos)
97 akapila@postgresql.o 4688 :UNC 0 : last_flushpos = flushpos;
4689 : :
97 akapila@postgresql.o 4690 :GNC 11 : rdt_data->flushpos_update_time = rdt_data->last_recv_time;
4691 : : }
4692 : :
4693 : : /* Return to wait for the changes to be applied */
4694 [ + + ]: 83 : if (last_flushpos < rdt_data->remote_lsn)
4695 : 50 : return;
4696 : :
4697 : : /*
4698 : : * Reaching this point implies should_stop_conflict_info_retention()
4699 : : * returned false earlier, meaning that the most recent duration for
4700 : : * advancing the non-removable transaction ID is within the
4701 : : * max_retention_duration or max_retention_duration is set to 0.
4702 : : *
4703 : : * Therefore, if conflict info retention was previously stopped due to a
4704 : : * timeout, it is now safe to resume retention.
4705 : : */
43 4706 [ + + ]: 33 : if (!MySubscription->retentionactive)
4707 : : {
4708 : 1 : rdt_data->phase = RDT_RESUME_CONFLICT_INFO_RETENTION;
4709 : 1 : return;
4710 : : }
4711 : :
4712 : : /*
4713 : : * Reaching here means the remote WAL position has been received, and all
4714 : : * transactions up to that position on the publisher have been applied and
4715 : : * flushed locally. So, we can advance the non-removable transaction ID.
4716 : : */
97 4717 [ - + ]: 32 : SpinLockAcquire(&MyLogicalRepWorker->relmutex);
4718 : 32 : MyLogicalRepWorker->oldest_nonremovable_xid = rdt_data->candidate_xid;
4719 : 32 : SpinLockRelease(&MyLogicalRepWorker->relmutex);
4720 : :
75 heikki.linnakangas@i 4721 [ + + ]: 32 : elog(DEBUG2, "confirmed flush up to remote lsn %X/%08X: new oldest_nonremovable_xid %u",
4722 : : LSN_FORMAT_ARGS(rdt_data->remote_lsn),
4723 : : rdt_data->candidate_xid);
4724 : :
4725 : : /* Notify launcher to update the xmin of the conflict slot */
97 akapila@postgresql.o 4726 : 32 : ApplyLauncherWakeup();
4727 : :
56 4728 : 32 : reset_retention_data_fields(rdt_data);
4729 : :
4730 : : /* process the next phase */
4731 : 32 : process_rdt_phase_transition(rdt_data, false);
4732 : : }
4733 : :
4734 : : /*
4735 : : * Check whether conflict information retention should be stopped due to
4736 : : * exceeding the maximum wait time (max_retention_duration).
4737 : : *
4738 : : * If retention should be stopped, return true. Otherwise, return false.
4739 : : */
4740 : : static bool
4741 : 254 : should_stop_conflict_info_retention(RetainDeadTuplesData *rdt_data)
4742 : : {
4743 : : TimestampTz now;
4744 : :
4745 [ - + ]: 254 : Assert(TransactionIdIsValid(rdt_data->candidate_xid));
4746 [ + + - + ]: 254 : Assert(rdt_data->phase == RDT_WAIT_FOR_PUBLISHER_STATUS ||
4747 : : rdt_data->phase == RDT_WAIT_FOR_LOCAL_FLUSH);
4748 : :
4749 [ + + ]: 254 : if (!MySubscription->maxretention)
4750 : 253 : return false;
4751 : :
4752 : : /*
4753 : : * Use last_recv_time when applying changes in the loop to avoid
4754 : : * unnecessary system time retrieval. If last_recv_time is not available,
4755 : : * obtain the current timestamp.
4756 : : */
4757 [ - + ]: 1 : now = rdt_data->last_recv_time ? rdt_data->last_recv_time : GetCurrentTimestamp();
4758 : :
4759 : : /*
4760 : : * Return early if the wait time has not exceeded the configured maximum
4761 : : * (max_retention_duration). Time spent waiting for table synchronization
4762 : : * is excluded from this calculation, as it occurs infrequently.
4763 : : */
4764 [ - + ]: 1 : if (!TimestampDifferenceExceeds(rdt_data->candidate_xid_time, now,
4765 : 1 : MySubscription->maxretention +
4766 : 1 : rdt_data->table_sync_wait_time))
56 akapila@postgresql.o 4767 :UNC 0 : return false;
4768 : :
56 akapila@postgresql.o 4769 :GNC 1 : return true;
4770 : : }
4771 : :
4772 : : /*
4773 : : * Workhorse for the RDT_STOP_CONFLICT_INFO_RETENTION phase.
4774 : : */
4775 : : static void
4776 : 1 : stop_conflict_info_retention(RetainDeadTuplesData *rdt_data)
4777 : : {
4778 : : /* Stop retention if not yet */
43 4779 [ + - ]: 1 : if (MySubscription->retentionactive)
4780 : : {
4781 : : /*
4782 : : * If the retention status cannot be updated (e.g., due to active
4783 : : * transaction), skip further processing to avoid inconsistent
4784 : : * retention behavior.
4785 : : */
4786 [ - + ]: 1 : if (!update_retention_status(false))
43 akapila@postgresql.o 4787 :UNC 0 : return;
4788 : :
43 akapila@postgresql.o 4789 [ - + ]:GNC 1 : SpinLockAcquire(&MyLogicalRepWorker->relmutex);
4790 : 1 : MyLogicalRepWorker->oldest_nonremovable_xid = InvalidTransactionId;
4791 : 1 : SpinLockRelease(&MyLogicalRepWorker->relmutex);
4792 : :
4793 [ + - ]: 1 : ereport(LOG,
4794 : : errmsg("logical replication worker for subscription \"%s\" has stopped retaining the information for detecting conflicts",
4795 : : MySubscription->name),
4796 : : errdetail("Retention is stopped because the apply process has not caught up with the publisher within the configured max_retention_duration."));
4797 : : }
4798 : :
4799 [ - + ]: 1 : Assert(!TransactionIdIsValid(MyLogicalRepWorker->oldest_nonremovable_xid));
4800 : :
4801 : : /*
4802 : : * If retention has been stopped, reset to the initial phase to retry
4803 : : * resuming retention. This reset is required to recalculate the current
4804 : : * wait time and resume retention if the time falls within
4805 : : * max_retention_duration.
4806 : : */
4807 : 1 : reset_retention_data_fields(rdt_data);
4808 : : }
4809 : :
4810 : : /*
4811 : : * Workhorse for the RDT_RESUME_CONFLICT_INFO_RETENTION phase.
4812 : : */
4813 : : static void
4814 : 1 : resume_conflict_info_retention(RetainDeadTuplesData *rdt_data)
4815 : : {
4816 : : /* We can't resume retention without updating retention status. */
4817 [ - + ]: 1 : if (!update_retention_status(true))
43 akapila@postgresql.o 4818 :UNC 0 : return;
4819 : :
43 akapila@postgresql.o 4820 [ + - - + ]:GNC 1 : ereport(LOG,
4821 : : errmsg("logical replication worker for subscription \"%s\" will resume retaining the information for detecting conflicts",
4822 : : MySubscription->name),
4823 : : MySubscription->maxretention
4824 : : ? errdetail("Retention is re-enabled because the apply process has caught up with the publisher within the configured max_retention_duration.")
4825 : : : errdetail("Retention is re-enabled because max_retention_duration has been set to unlimited."));
4826 : :
4827 : : /*
4828 : : * Restart the worker to let the launcher initialize
4829 : : * oldest_nonremovable_xid at startup.
4830 : : *
4831 : : * While it's technically possible to derive this value on-the-fly using
4832 : : * the conflict detection slot's xmin, doing so risks a race condition:
4833 : : * the launcher might clean slot.xmin just after retention resumes. This
4834 : : * would make oldest_nonremovable_xid unreliable, especially during xid
4835 : : * wraparound.
4836 : : *
4837 : : * Although this can be prevented by introducing heavy weight locking, the
4838 : : * complexity it will bring doesn't seem worthwhile given how rarely
4839 : : * retention is resumed.
4840 : : */
4841 : 1 : apply_worker_exit();
4842 : : }
4843 : :
4844 : : /*
4845 : : * Updates pg_subscription.subretentionactive to the given value within a
4846 : : * new transaction.
4847 : : *
4848 : : * If already inside an active transaction, skips the update and returns
4849 : : * false.
4850 : : *
4851 : : * Returns true if the update is successfully performed.
4852 : : */
4853 : : static bool
4854 : 2 : update_retention_status(bool active)
4855 : : {
4856 : : /*
4857 : : * Do not update the catalog during an active transaction. The transaction
4858 : : * may be started during change application, leading to a possible
4859 : : * rollback of catalog updates if the application fails subsequently.
4860 : : */
56 4861 [ - + ]: 2 : if (IsTransactionState())
43 akapila@postgresql.o 4862 :UNC 0 : return false;
4863 : :
56 akapila@postgresql.o 4864 :GNC 2 : StartTransactionCommand();
4865 : :
4866 : : /*
4867 : : * Updating pg_subscription might involve TOAST table access, so ensure we
4868 : : * have a valid snapshot.
4869 : : */
4870 : 2 : PushActiveSnapshot(GetTransactionSnapshot());
4871 : :
4872 : : /* Update pg_subscription.subretentionactive */
43 4873 : 2 : UpdateDeadTupleRetentionStatus(MySubscription->oid, active);
4874 : :
56 4875 : 2 : PopActiveSnapshot();
4876 : 2 : CommitTransactionCommand();
4877 : :
4878 : : /* Notify launcher to update the conflict slot */
4879 : 2 : ApplyLauncherWakeup();
4880 : :
43 4881 : 2 : MySubscription->retentionactive = active;
4882 : :
4883 : 2 : return true;
4884 : : }
4885 : :
4886 : : /*
4887 : : * Reset all data fields of RetainDeadTuplesData except those used to
4888 : : * determine the timing for the next round of transaction ID advancement. We
4889 : : * can even use flushpos_update_time in the next round to decide whether to get
4890 : : * the latest flush position.
4891 : : */
4892 : : static void
56 4893 : 33 : reset_retention_data_fields(RetainDeadTuplesData *rdt_data)
4894 : : {
97 4895 : 33 : rdt_data->phase = RDT_GET_CANDIDATE_XID;
4896 : 33 : rdt_data->remote_lsn = InvalidXLogRecPtr;
4897 : 33 : rdt_data->remote_oldestxid = InvalidFullTransactionId;
4898 : 33 : rdt_data->remote_nextxid = InvalidFullTransactionId;
4899 : 33 : rdt_data->reply_time = 0;
4900 : 33 : rdt_data->remote_wait_for = InvalidFullTransactionId;
4901 : 33 : rdt_data->candidate_xid = InvalidTransactionId;
56 4902 : 33 : rdt_data->table_sync_wait_time = 0;
97 4903 : 33 : }
4904 : :
4905 : : /*
4906 : : * Adjust the interval for advancing non-removable transaction IDs.
4907 : : *
4908 : : * If there is no activity on the node or retention has been stopped, we
4909 : : * progressively double the interval used to advance non-removable transaction
4910 : : * ID. This helps conserve CPU and network resources when there's little benefit
4911 : : * to frequent updates.
4912 : : *
4913 : : * The interval is capped by the lowest of the following:
4914 : : * - wal_receiver_status_interval (if set and retention is active),
4915 : : * - a default maximum of 3 minutes,
4916 : : * - max_retention_duration (if retention is active).
4917 : : *
4918 : : * This ensures the interval never exceeds the retention boundary, even if other
4919 : : * limits are higher. Once activity resumes on the node and the retention is
4920 : : * active, the interval is reset to lesser of 100ms and max_retention_duration,
4921 : : * allowing timely advancement of non-removable transaction ID.
4922 : : *
4923 : : * XXX The use of wal_receiver_status_interval is a bit arbitrary so we can
4924 : : * consider the other interval or a separate GUC if the need arises.
4925 : : */
4926 : : static void
4927 : 144 : adjust_xid_advance_interval(RetainDeadTuplesData *rdt_data, bool new_xid_found)
4928 : : {
43 4929 [ - + - - ]: 144 : if (rdt_data->xid_advance_interval && !new_xid_found)
97 akapila@postgresql.o 4930 :UNC 0 : {
4931 : 0 : int max_interval = wal_receiver_status_interval
4932 : 0 : ? wal_receiver_status_interval * 1000
4933 [ # # ]: 0 : : MAX_XID_ADVANCE_INTERVAL;
4934 : :
4935 : : /*
4936 : : * No new transaction ID has been assigned since the last check, so
4937 : : * double the interval, but not beyond the maximum allowable value.
4938 : : */
4939 : 0 : rdt_data->xid_advance_interval = Min(rdt_data->xid_advance_interval * 2,
4940 : : max_interval);
4941 : : }
43 akapila@postgresql.o 4942 [ - + ]:GNC 144 : else if (rdt_data->xid_advance_interval &&
43 akapila@postgresql.o 4943 [ # # ]:UNC 0 : !MySubscription->retentionactive)
4944 : : {
4945 : : /*
4946 : : * Retention has been stopped, so double the interval-capped at a
4947 : : * maximum of 3 minutes. The wal_receiver_status_interval is
4948 : : * intentionally not used as a upper bound, since the likelihood of
4949 : : * retention resuming is lower than that of general activity resuming.
4950 : : */
4951 : 0 : rdt_data->xid_advance_interval = Min(rdt_data->xid_advance_interval * 2,
4952 : : MAX_XID_ADVANCE_INTERVAL);
4953 : : }
4954 : : else
4955 : : {
4956 : : /*
4957 : : * A new transaction ID was found or the interval is not yet
4958 : : * initialized, so set the interval to the minimum value.
4959 : : */
97 akapila@postgresql.o 4960 :GNC 144 : rdt_data->xid_advance_interval = MIN_XID_ADVANCE_INTERVAL;
4961 : : }
4962 : :
4963 : : /*
4964 : : * Ensure the wait time remains within the maximum retention time limit
4965 : : * when retention is active.
4966 : : */
43 4967 [ + + ]: 144 : if (MySubscription->retentionactive)
4968 : 143 : rdt_data->xid_advance_interval = Min(rdt_data->xid_advance_interval,
4969 : : MySubscription->maxretention);
97 4970 : 144 : }
4971 : :
4972 : : /*
4973 : : * Exit routine for apply workers due to subscription parameter changes.
4974 : : */
4975 : : static void
1023 akapila@postgresql.o 4976 :CBC 43 : apply_worker_exit(void)
4977 : : {
4978 [ - + ]: 43 : if (am_parallel_apply_worker())
4979 : : {
4980 : : /*
4981 : : * Don't stop the parallel apply worker as the leader will detect the
4982 : : * subscription parameter change and restart logical replication later
4983 : : * anyway. This also prevents the leader from reporting errors when
4984 : : * trying to communicate with a stopped parallel apply worker, which
4985 : : * would accidentally disable subscriptions if disable_on_error was
4986 : : * set.
4987 : : */
1023 akapila@postgresql.o 4988 :UBC 0 : return;
4989 : : }
4990 : :
4991 : : /*
4992 : : * Reset the last-start time for this apply worker so that the launcher
4993 : : * will restart it without waiting for wal_retrieve_retry_interval if the
4994 : : * subscription is still active, and so that we won't leak that hash table
4995 : : * entry if it isn't.
4996 : : */
816 akapila@postgresql.o 4997 [ + - ]:CBC 43 : if (am_leader_apply_worker())
1010 tgl@sss.pgh.pa.us 4998 : 43 : ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);
4999 : :
1023 akapila@postgresql.o 5000 : 43 : proc_exit(0);
5001 : : }
5002 : :
5003 : : /*
5004 : : * Reread subscription info if needed.
5005 : : *
5006 : : * For significant changes, we react by exiting the current process; a new
5007 : : * one will be launched afterwards if needed.
5008 : : */
5009 : : void
3069 peter_e@gmx.net 5010 : 3759 : maybe_reread_subscription(void)
5011 : : {
5012 : : MemoryContext oldctx;
5013 : : Subscription *newsub;
3086 bruce@momjian.us 5014 : 3759 : bool started_tx = false;
5015 : :
5016 : : /* When cache state is valid there is nothing to do here. */
3069 peter_e@gmx.net 5017 [ + + ]: 3759 : if (MySubscriptionValid)
5018 : 3670 : return;
5019 : :
5020 : : /* This function might be called inside or outside of transaction. */
3141 5021 [ + + ]: 89 : if (!IsTransactionState())
5022 : : {
5023 : 82 : StartTransactionCommand();
5024 : 82 : started_tx = true;
5025 : : }
5026 : :
5027 : : /* Ensure allocations in permanent context. */
3094 5028 : 89 : oldctx = MemoryContextSwitchTo(ApplyContext);
5029 : :
3204 5030 : 89 : newsub = GetSubscription(MyLogicalRepWorker->subid, true);
5031 : :
5032 : : /*
5033 : : * Exit if the subscription was removed. This normally should not happen
5034 : : * as the worker gets killed during DROP SUBSCRIPTION.
5035 : : */
3200 5036 [ - + ]: 89 : if (!newsub)
5037 : : {
3204 peter_e@gmx.net 5038 [ # # ]:UBC 0 : ereport(LOG,
5039 : : (errmsg("logical replication worker for subscription \"%s\" will stop because the subscription was removed",
5040 : : MySubscription->name)));
5041 : :
5042 : : /* Ensure we remove no-longer-useful entry for worker's start time */
816 akapila@postgresql.o 5043 [ # # ]: 0 : if (am_leader_apply_worker())
1010 tgl@sss.pgh.pa.us 5044 : 0 : ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);
5045 : :
3204 peter_e@gmx.net 5046 : 0 : proc_exit(0);
5047 : : }
5048 : :
5049 : : /* Exit if the subscription was disabled. */
3094 peter_e@gmx.net 5050 [ + + ]:CBC 89 : if (!newsub->enabled)
5051 : : {
5052 [ + - ]: 12 : ereport(LOG,
5053 : : (errmsg("logical replication worker for subscription \"%s\" will stop because the subscription was disabled",
5054 : : MySubscription->name)));
5055 : :
1023 akapila@postgresql.o 5056 : 12 : apply_worker_exit();
5057 : : }
5058 : :
5059 : : /* !slotname should never happen when enabled is true. */
3094 peter_e@gmx.net 5060 [ - + ]: 77 : Assert(newsub->slotname);
5061 : :
5062 : : /* two-phase cannot be altered while the worker is running */
1567 akapila@postgresql.o 5063 [ - + ]: 77 : Assert(newsub->twophasestate == MySubscription->twophasestate);
5064 : :
5065 : : /*
5066 : : * Exit if any parameter that affects the remote connection was changed.
5067 : : * The launcher will start a new worker but note that the parallel apply
5068 : : * worker won't restart if the streaming option's value is changed from
5069 : : * 'parallel' to any other value or the server decides not to stream the
5070 : : * in-progress transaction.
5071 : : */
1928 tgl@sss.pgh.pa.us 5072 [ + + ]: 77 : if (strcmp(newsub->conninfo, MySubscription->conninfo) != 0 ||
5073 [ + + ]: 75 : strcmp(newsub->name, MySubscription->name) != 0 ||
5074 [ + - ]: 74 : strcmp(newsub->slotname, MySubscription->slotname) != 0 ||
5075 [ + + ]: 74 : newsub->binary != MySubscription->binary ||
1881 akapila@postgresql.o 5076 [ + + ]: 68 : newsub->stream != MySubscription->stream ||
922 5077 [ + - ]: 63 : newsub->passwordrequired != MySubscription->passwordrequired ||
1195 5078 [ + + ]: 63 : strcmp(newsub->origin, MySubscription->origin) != 0 ||
1390 jdavis@postgresql.or 5079 [ + + ]: 61 : newsub->owner != MySubscription->owner ||
1928 tgl@sss.pgh.pa.us 5080 [ + + ]: 60 : !equal(newsub->publications, MySubscription->publications))
5081 : : {
1023 akapila@postgresql.o 5082 [ - + ]: 26 : if (am_parallel_apply_worker())
1023 akapila@postgresql.o 5083 [ # # ]:UBC 0 : ereport(LOG,
5084 : : (errmsg("logical replication parallel apply worker for subscription \"%s\" will stop because of a parameter change",
5085 : : MySubscription->name)));
5086 : : else
1023 akapila@postgresql.o 5087 [ + - ]:CBC 26 : ereport(LOG,
5088 : : (errmsg("logical replication worker for subscription \"%s\" will restart because of a parameter change",
5089 : : MySubscription->name)));
5090 : :
5091 : 26 : apply_worker_exit();
5092 : : }
5093 : :
5094 : : /*
5095 : : * Exit if the subscription owner's superuser privileges have been
5096 : : * revoked.
5097 : : */
742 5098 [ + + + + ]: 51 : if (!newsub->ownersuperuser && MySubscription->ownersuperuser)
5099 : : {
5100 [ - + ]: 4 : if (am_parallel_apply_worker())
742 akapila@postgresql.o 5101 [ # # ]:UBC 0 : ereport(LOG,
5102 : : errmsg("logical replication parallel apply worker for subscription \"%s\" will stop because the subscription owner's superuser privileges have been revoked",
5103 : : MySubscription->name));
5104 : : else
742 akapila@postgresql.o 5105 [ + - ]:CBC 4 : ereport(LOG,
5106 : : errmsg("logical replication worker for subscription \"%s\" will restart because the subscription owner's superuser privileges have been revoked",
5107 : : MySubscription->name));
5108 : :
5109 : 4 : apply_worker_exit();
5110 : : }
5111 : :
5112 : : /* Check for other changes that should never happen too. */
3130 peter_e@gmx.net 5113 [ - + ]: 47 : if (newsub->dbid != MySubscription->dbid)
5114 : : {
3204 peter_e@gmx.net 5115 [ # # ]:UBC 0 : elog(ERROR, "subscription %u changed unexpectedly",
5116 : : MyLogicalRepWorker->subid);
5117 : : }
5118 : :
5119 : : /* Clean old subscription info and switch to new one. */
3204 peter_e@gmx.net 5120 :CBC 47 : FreeSubscription(MySubscription);
5121 : 47 : MySubscription = newsub;
5122 : :
5123 : 47 : MemoryContextSwitchTo(oldctx);
5124 : :
5125 : : /* Change synchronous commit according to the user's wishes */
3119 5126 : 47 : SetConfigOption("synchronous_commit", MySubscription->synccommit,
5127 : : PGC_BACKEND, PGC_S_OVERRIDE);
5128 : :
3141 5129 [ + + ]: 47 : if (started_tx)
5130 : 43 : CommitTransactionCommand();
5131 : :
3204 5132 : 47 : MySubscriptionValid = true;
5133 : : }
5134 : :
5135 : : /*
5136 : : * Callback from subscription syscache invalidation.
5137 : : */
5138 : : static void
5139 : 95 : subscription_change_cb(Datum arg, int cacheid, uint32 hashvalue)
5140 : : {
5141 : 95 : MySubscriptionValid = false;
5142 : 95 : }
5143 : :
5144 : : /*
5145 : : * subxact_info_write
5146 : : * Store information about subxacts for a toplevel transaction.
5147 : : *
5148 : : * For each subxact we store offset of its first change in the main file.
5149 : : * The file is always over-written as a whole.
5150 : : *
5151 : : * XXX We should only store subxacts that were not aborted yet.
5152 : : */
5153 : : static void
1881 akapila@postgresql.o 5154 : 372 : subxact_info_write(Oid subid, TransactionId xid)
5155 : : {
5156 : : char path[MAXPGPATH];
5157 : : Size len;
5158 : : BufFile *fd;
5159 : :
5160 [ - + ]: 372 : Assert(TransactionIdIsValid(xid));
5161 : :
5162 : : /* construct the subxact filename */
1517 5163 : 372 : subxact_filename(path, subid, xid);
5164 : :
5165 : : /* Delete the subxacts file, if exists. */
1881 5166 [ + + ]: 372 : if (subxact_data.nsubxacts == 0)
5167 : : {
1517 5168 : 290 : cleanup_subxact_info();
5169 : 290 : BufFileDeleteFileSet(MyLogicalRepWorker->stream_fileset, path, true);
5170 : :
1881 5171 : 290 : return;
5172 : : }
5173 : :
5174 : : /*
5175 : : * Create the subxact file if it not already created, otherwise open the
5176 : : * existing file.
5177 : : */
1517 5178 : 82 : fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path, O_RDWR,
5179 : : true);
5180 [ + + ]: 82 : if (fd == NULL)
5181 : 8 : fd = BufFileCreateFileSet(MyLogicalRepWorker->stream_fileset, path);
5182 : :
1881 5183 : 82 : len = sizeof(SubXactInfo) * subxact_data.nsubxacts;
5184 : :
5185 : : /* Write the subxact count and subxact info */
5186 : 82 : BufFileWrite(fd, &subxact_data.nsubxacts, sizeof(subxact_data.nsubxacts));
5187 : 82 : BufFileWrite(fd, subxact_data.subxacts, len);
5188 : :
5189 : 82 : BufFileClose(fd);
5190 : :
5191 : : /* free the memory allocated for subxact info */
5192 : 82 : cleanup_subxact_info();
5193 : : }
5194 : :
5195 : : /*
5196 : : * subxact_info_read
5197 : : * Restore information about subxacts of a streamed transaction.
5198 : : *
5199 : : * Read information about subxacts into the structure subxact_data that can be
5200 : : * used later.
5201 : : */
5202 : : static void
5203 : 344 : subxact_info_read(Oid subid, TransactionId xid)
5204 : : {
5205 : : char path[MAXPGPATH];
5206 : : Size len;
5207 : : BufFile *fd;
5208 : : MemoryContext oldctx;
5209 : :
5210 [ - + ]: 344 : Assert(!subxact_data.subxacts);
5211 [ - + ]: 344 : Assert(subxact_data.nsubxacts == 0);
5212 [ - + ]: 344 : Assert(subxact_data.nsubxacts_max == 0);
5213 : :
5214 : : /*
5215 : : * If the subxact file doesn't exist that means we don't have any subxact
5216 : : * info.
5217 : : */
5218 : 344 : subxact_filename(path, subid, xid);
1517 5219 : 344 : fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset, path, O_RDONLY,
5220 : : true);
5221 [ + + ]: 344 : if (fd == NULL)
5222 : 265 : return;
5223 : :
5224 : : /* read number of subxact items */
1016 peter@eisentraut.org 5225 : 79 : BufFileReadExact(fd, &subxact_data.nsubxacts, sizeof(subxact_data.nsubxacts));
5226 : :
1881 akapila@postgresql.o 5227 : 79 : len = sizeof(SubXactInfo) * subxact_data.nsubxacts;
5228 : :
5229 : : /* we keep the maximum as a power of 2 */
48 michael@paquier.xyz 5230 :GNC 79 : subxact_data.nsubxacts_max = 1 << pg_ceil_log2_32(subxact_data.nsubxacts);
5231 : :
5232 : : /*
5233 : : * Allocate subxact information in the logical streaming context. We need
5234 : : * this information during the complete stream so that we can add the sub
5235 : : * transaction info to this. On stream stop we will flush this information
5236 : : * to the subxact file and reset the logical streaming context.
5237 : : */
1881 akapila@postgresql.o 5238 :CBC 79 : oldctx = MemoryContextSwitchTo(LogicalStreamingContext);
5239 : 79 : subxact_data.subxacts = palloc(subxact_data.nsubxacts_max *
5240 : : sizeof(SubXactInfo));
5241 : 79 : MemoryContextSwitchTo(oldctx);
5242 : :
1016 peter@eisentraut.org 5243 [ + - ]: 79 : if (len > 0)
5244 : 79 : BufFileReadExact(fd, subxact_data.subxacts, len);
5245 : :
1881 akapila@postgresql.o 5246 : 79 : BufFileClose(fd);
5247 : : }
5248 : :
5249 : : /*
5250 : : * subxact_info_add
5251 : : * Add information about a subxact (offset in the main file).
5252 : : */
5253 : : static void
5254 : 102512 : subxact_info_add(TransactionId xid)
5255 : : {
5256 : 102512 : SubXactInfo *subxacts = subxact_data.subxacts;
5257 : : int64 i;
5258 : :
5259 : : /* We must have a valid top level stream xid and a stream fd. */
5260 [ - + ]: 102512 : Assert(TransactionIdIsValid(stream_xid));
5261 [ - + ]: 102512 : Assert(stream_fd != NULL);
5262 : :
5263 : : /*
5264 : : * If the XID matches the toplevel transaction, we don't want to add it.
5265 : : */
5266 [ + + ]: 102512 : if (stream_xid == xid)
5267 : 92388 : return;
5268 : :
5269 : : /*
5270 : : * In most cases we're checking the same subxact as we've already seen in
5271 : : * the last call, so make sure to ignore it (this change comes later).
5272 : : */
5273 [ + + ]: 10124 : if (subxact_data.subxact_last == xid)
5274 : 10048 : return;
5275 : :
5276 : : /* OK, remember we're processing this XID. */
5277 : 76 : subxact_data.subxact_last = xid;
5278 : :
5279 : : /*
5280 : : * Check if the transaction is already present in the array of subxact. We
5281 : : * intentionally scan the array from the tail, because we're likely adding
5282 : : * a change for the most recent subtransactions.
5283 : : *
5284 : : * XXX Can we rely on the subxact XIDs arriving in sorted order? That
5285 : : * would allow us to use binary search here.
5286 : : */
5287 [ + + ]: 95 : for (i = subxact_data.nsubxacts; i > 0; i--)
5288 : : {
5289 : : /* found, so we're done */
5290 [ + + ]: 76 : if (subxacts[i - 1].xid == xid)
5291 : 57 : return;
5292 : : }
5293 : :
5294 : : /* This is a new subxact, so we need to add it to the array. */
5295 [ + + ]: 19 : if (subxact_data.nsubxacts == 0)
5296 : : {
5297 : : MemoryContext oldctx;
5298 : :
5299 : 8 : subxact_data.nsubxacts_max = 128;
5300 : :
5301 : : /*
5302 : : * Allocate this memory for subxacts in per-stream context, see
5303 : : * subxact_info_read.
5304 : : */
5305 : 8 : oldctx = MemoryContextSwitchTo(LogicalStreamingContext);
5306 : 8 : subxacts = palloc(subxact_data.nsubxacts_max * sizeof(SubXactInfo));
5307 : 8 : MemoryContextSwitchTo(oldctx);
5308 : : }
5309 [ + + ]: 11 : else if (subxact_data.nsubxacts == subxact_data.nsubxacts_max)
5310 : : {
5311 : 10 : subxact_data.nsubxacts_max *= 2;
5312 : 10 : subxacts = repalloc(subxacts,
5313 : 10 : subxact_data.nsubxacts_max * sizeof(SubXactInfo));
5314 : : }
5315 : :
5316 : 19 : subxacts[subxact_data.nsubxacts].xid = xid;
5317 : :
5318 : : /*
5319 : : * Get the current offset of the stream file and store it as offset of
5320 : : * this subxact.
5321 : : */
5322 : 19 : BufFileTell(stream_fd,
5323 : 19 : &subxacts[subxact_data.nsubxacts].fileno,
5324 : 19 : &subxacts[subxact_data.nsubxacts].offset);
5325 : :
5326 : 19 : subxact_data.nsubxacts++;
5327 : 19 : subxact_data.subxacts = subxacts;
5328 : : }
5329 : :
5330 : : /* format filename for file containing the info about subxacts */
5331 : : static inline void
5332 : 747 : subxact_filename(char *path, Oid subid, TransactionId xid)
5333 : : {
5334 : 747 : snprintf(path, MAXPGPATH, "%u-%u.subxacts", subid, xid);
5335 : 747 : }
5336 : :
5337 : : /* format filename for file containing serialized changes */
5338 : : static inline void
5339 : 438 : changes_filename(char *path, Oid subid, TransactionId xid)
5340 : : {
5341 : 438 : snprintf(path, MAXPGPATH, "%u-%u.changes", subid, xid);
5342 : 438 : }
5343 : :
5344 : : /*
5345 : : * stream_cleanup_files
5346 : : * Cleanup files for a subscription / toplevel transaction.
5347 : : *
5348 : : * Remove files with serialized changes and subxact info for a particular
5349 : : * toplevel transaction. Each subscription has a separate set of files
5350 : : * for any toplevel transaction.
5351 : : */
5352 : : void
5353 : 31 : stream_cleanup_files(Oid subid, TransactionId xid)
5354 : : {
5355 : : char path[MAXPGPATH];
5356 : :
5357 : : /* Delete the changes file. */
5358 : 31 : changes_filename(path, subid, xid);
1517 5359 : 31 : BufFileDeleteFileSet(MyLogicalRepWorker->stream_fileset, path, false);
5360 : :
5361 : : /* Delete the subxact file, if it exists. */
5362 : 31 : subxact_filename(path, subid, xid);
5363 : 31 : BufFileDeleteFileSet(MyLogicalRepWorker->stream_fileset, path, true);
1881 5364 : 31 : }
5365 : :
5366 : : /*
5367 : : * stream_open_file
5368 : : * Open a file that we'll use to serialize changes for a toplevel
5369 : : * transaction.
5370 : : *
5371 : : * Open a file for streamed changes from a toplevel transaction identified
5372 : : * by stream_xid (global variable). If it's the first chunk of streamed
5373 : : * changes for this transaction, create the buffile, otherwise open the
5374 : : * previously created file.
5375 : : */
5376 : : static void
5377 : 363 : stream_open_file(Oid subid, TransactionId xid, bool first_segment)
5378 : : {
5379 : : char path[MAXPGPATH];
5380 : : MemoryContext oldcxt;
5381 : :
5382 [ - + ]: 363 : Assert(OidIsValid(subid));
5383 [ - + ]: 363 : Assert(TransactionIdIsValid(xid));
5384 [ - + ]: 363 : Assert(stream_fd == NULL);
5385 : :
5386 : :
5387 : 363 : changes_filename(path, subid, xid);
5388 [ - + ]: 363 : elog(DEBUG1, "opening file \"%s\" for streamed changes", path);
5389 : :
5390 : : /*
5391 : : * Create/open the buffiles under the logical streaming context so that we
5392 : : * have those files until stream stop.
5393 : : */
5394 : 363 : oldcxt = MemoryContextSwitchTo(LogicalStreamingContext);
5395 : :
5396 : : /*
5397 : : * If this is the first streamed segment, create the changes file.
5398 : : * Otherwise, just open the file for writing, in append mode.
5399 : : */
5400 [ + + ]: 363 : if (first_segment)
1517 5401 : 32 : stream_fd = BufFileCreateFileSet(MyLogicalRepWorker->stream_fileset,
5402 : : path);
5403 : : else
5404 : : {
5405 : : /*
5406 : : * Open the file and seek to the end of the file because we always
5407 : : * append the changes file.
5408 : : */
5409 : 331 : stream_fd = BufFileOpenFileSet(MyLogicalRepWorker->stream_fileset,
5410 : : path, O_RDWR, false);
1881 5411 : 331 : BufFileSeek(stream_fd, 0, 0, SEEK_END);
5412 : : }
5413 : :
5414 : 363 : MemoryContextSwitchTo(oldcxt);
5415 : 363 : }
5416 : :
5417 : : /*
5418 : : * stream_close_file
5419 : : * Close the currently open file with streamed changes.
5420 : : */
5421 : : static void
5422 : 393 : stream_close_file(void)
5423 : : {
5424 [ - + ]: 393 : Assert(stream_fd != NULL);
5425 : :
5426 : 393 : BufFileClose(stream_fd);
5427 : :
5428 : 393 : stream_fd = NULL;
5429 : 393 : }
5430 : :
5431 : : /*
5432 : : * stream_write_change
5433 : : * Serialize a change to a file for the current toplevel transaction.
5434 : : *
5435 : : * The change is serialized in a simple format, with length (not including
5436 : : * the length), action code (identifying the message type) and message
5437 : : * contents (without the subxact TransactionId value).
5438 : : */
5439 : : static void
5440 : 107553 : stream_write_change(char action, StringInfo s)
5441 : : {
5442 : : int len;
5443 : :
5444 [ - + ]: 107553 : Assert(stream_fd != NULL);
5445 : :
5446 : : /* total on-disk size, including the action type character */
5447 : 107553 : len = (s->len - s->cursor) + sizeof(char);
5448 : :
5449 : : /* first write the size */
5450 : 107553 : BufFileWrite(stream_fd, &len, sizeof(len));
5451 : :
5452 : : /* then the action */
5453 : 107553 : BufFileWrite(stream_fd, &action, sizeof(action));
5454 : :
5455 : : /* and finally the remaining part of the buffer (after the XID) */
5456 : 107553 : len = (s->len - s->cursor);
5457 : :
5458 : 107553 : BufFileWrite(stream_fd, &s->data[s->cursor], len);
5459 : 107553 : }
5460 : :
5461 : : /*
5462 : : * stream_open_and_write_change
5463 : : * Serialize a message to a file for the given transaction.
5464 : : *
5465 : : * This function is similar to stream_write_change except that it will open the
5466 : : * target file if not already before writing the message and close the file at
5467 : : * the end.
5468 : : */
5469 : : static void
1023 5470 : 5 : stream_open_and_write_change(TransactionId xid, char action, StringInfo s)
5471 : : {
5472 [ - + ]: 5 : Assert(!in_streamed_transaction);
5473 : :
5474 [ + - ]: 5 : if (!stream_fd)
5475 : 5 : stream_start_internal(xid, false);
5476 : :
5477 : 5 : stream_write_change(action, s);
5478 : 5 : stream_stop_internal(xid);
5479 : 5 : }
5480 : :
5481 : : /*
5482 : : * Sets streaming options including replication slot name and origin start
5483 : : * position. Workers need these options for logical replication.
5484 : : */
5485 : : void
817 5486 : 404 : set_stream_options(WalRcvStreamOptions *options,
5487 : : char *slotname,
5488 : : XLogRecPtr *origin_startpos)
5489 : : {
5490 : : int server_version;
5491 : :
5492 : 404 : options->logical = true;
5493 : 404 : options->startpoint = *origin_startpos;
5494 : 404 : options->slotname = slotname;
5495 : :
5496 : 404 : server_version = walrcv_server_version(LogRepWorkerWalRcvConn);
5497 : 404 : options->proto.logical.proto_version =
5498 [ - + - - : 404 : server_version >= 160000 ? LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM :
- - ]
5499 : : server_version >= 150000 ? LOGICALREP_PROTO_TWOPHASE_VERSION_NUM :
5500 : : server_version >= 140000 ? LOGICALREP_PROTO_STREAM_VERSION_NUM :
5501 : : LOGICALREP_PROTO_VERSION_NUM;
5502 : :
5503 : 404 : options->proto.logical.publication_names = MySubscription->publications;
5504 : 404 : options->proto.logical.binary = MySubscription->binary;
5505 : :
5506 : : /*
5507 : : * Assign the appropriate option value for streaming option according to
5508 : : * the 'streaming' mode and the publisher's ability to support that mode.
5509 : : */
5510 [ + - ]: 404 : if (server_version >= 160000 &&
5511 [ + + ]: 404 : MySubscription->stream == LOGICALREP_STREAM_PARALLEL)
5512 : : {
5513 : 371 : options->proto.logical.streaming_str = "parallel";
5514 : 371 : MyLogicalRepWorker->parallel_apply = true;
5515 : : }
5516 [ + - ]: 33 : else if (server_version >= 140000 &&
5517 [ + + ]: 33 : MySubscription->stream != LOGICALREP_STREAM_OFF)
5518 : : {
5519 : 25 : options->proto.logical.streaming_str = "on";
5520 : 25 : MyLogicalRepWorker->parallel_apply = false;
5521 : : }
5522 : : else
5523 : : {
5524 : 8 : options->proto.logical.streaming_str = NULL;
5525 : 8 : MyLogicalRepWorker->parallel_apply = false;
5526 : : }
5527 : :
5528 : 404 : options->proto.logical.twophase = false;
5529 : 404 : options->proto.logical.origin = pstrdup(MySubscription->origin);
5530 : 404 : }
5531 : :
5532 : : /*
5533 : : * Cleanup the memory for subxacts and reset the related variables.
5534 : : */
5535 : : static inline void
1881 5536 : 376 : cleanup_subxact_info()
5537 : : {
5538 [ + + ]: 376 : if (subxact_data.subxacts)
5539 : 87 : pfree(subxact_data.subxacts);
5540 : :
5541 : 376 : subxact_data.subxacts = NULL;
5542 : 376 : subxact_data.subxact_last = InvalidTransactionId;
5543 : 376 : subxact_data.nsubxacts = 0;
5544 : 376 : subxact_data.nsubxacts_max = 0;
5545 : 376 : }
5546 : :
5547 : : /*
5548 : : * Common function to run the apply loop with error handling. Disable the
5549 : : * subscription, if necessary.
5550 : : *
5551 : : * Note that we don't handle FATAL errors which are probably because
5552 : : * of system resource error and are not repeatable.
5553 : : */
5554 : : void
817 5555 : 404 : start_apply(XLogRecPtr origin_startpos)
5556 : : {
1324 5557 [ + + ]: 404 : PG_TRY();
5558 : : {
817 5559 : 404 : LogicalRepApplyLoop(origin_startpos);
5560 : : }
1324 5561 : 78 : PG_CATCH();
5562 : : {
5563 : : /*
5564 : : * Reset the origin state to prevent the advancement of origin
5565 : : * progress if we fail to apply. Otherwise, this will result in
5566 : : * transaction loss as that transaction won't be sent again by the
5567 : : * server.
5568 : : */
188 5569 : 78 : replorigin_reset(0, (Datum) 0);
5570 : :
1324 5571 [ + + ]: 78 : if (MySubscription->disableonerr)
5572 : 3 : DisableSubscriptionAndExit();
5573 : : else
5574 : : {
5575 : : /*
5576 : : * Report the worker failed while applying changes. Abort the
5577 : : * current transaction so that the stats message is sent in an
5578 : : * idle state.
5579 : : */
5580 : 75 : AbortOutOfAnyTransaction();
817 5581 : 75 : pgstat_report_subscription_error(MySubscription->oid, !am_tablesync_worker());
5582 : :
1324 5583 : 75 : PG_RE_THROW();
5584 : : }
5585 : : }
1324 akapila@postgresql.o 5586 [ # # ]:UBC 0 : PG_END_TRY();
5587 : 0 : }
5588 : :
5589 : : /*
5590 : : * Runs the leader apply worker.
5591 : : *
5592 : : * It sets up replication origin, streaming options and then starts streaming.
5593 : : */
5594 : : static void
817 akapila@postgresql.o 5595 :CBC 259 : run_apply_worker()
5596 : : {
5597 : : char originname[NAMEDATALEN];
5598 : 259 : XLogRecPtr origin_startpos = InvalidXLogRecPtr;
5599 : 259 : char *slotname = NULL;
5600 : : WalRcvStreamOptions options;
5601 : : RepOriginId originid;
5602 : : TimeLineID startpointTLI;
5603 : : char *err;
5604 : : bool must_use_password;
5605 : :
5606 : 259 : slotname = MySubscription->slotname;
5607 : :
5608 : : /*
5609 : : * This shouldn't happen if the subscription is enabled, but guard against
5610 : : * DDL bugs or manual catalog changes. (libpqwalreceiver will crash if
5611 : : * slot is NULL.)
5612 : : */
5613 [ - + ]: 259 : if (!slotname)
817 akapila@postgresql.o 5614 [ # # ]:UBC 0 : ereport(ERROR,
5615 : : (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
5616 : : errmsg("subscription has no replication slot set")));
5617 : :
5618 : : /* Setup replication origin tracking. */
817 akapila@postgresql.o 5619 :CBC 259 : ReplicationOriginNameForLogicalRep(MySubscription->oid, InvalidOid,
5620 : : originname, sizeof(originname));
5621 : 259 : StartTransactionCommand();
5622 : 259 : originid = replorigin_by_name(originname, true);
5623 [ - + ]: 259 : if (!OidIsValid(originid))
817 akapila@postgresql.o 5624 :UBC 0 : originid = replorigin_create(originname);
817 akapila@postgresql.o 5625 :CBC 259 : replorigin_session_setup(originid, 0);
5626 : 259 : replorigin_session_origin = originid;
5627 : 259 : origin_startpos = replorigin_session_get_progress(false);
742 5628 : 259 : CommitTransactionCommand();
5629 : :
5630 : : /* Is the use of a password mandatory? */
817 5631 [ + + ]: 497 : must_use_password = MySubscription->passwordrequired &&
742 5632 [ + + ]: 238 : !MySubscription->ownersuperuser;
5633 : :
817 5634 : 259 : LogRepWorkerWalRcvConn = walrcv_connect(MySubscription->conninfo, true,
5635 : : true, must_use_password,
5636 : : MySubscription->name, &err);
5637 : :
5638 [ + + ]: 251 : if (LogRepWorkerWalRcvConn == NULL)
5639 [ + - ]: 31 : ereport(ERROR,
5640 : : (errcode(ERRCODE_CONNECTION_FAILURE),
5641 : : errmsg("apply worker for subscription \"%s\" could not connect to the publisher: %s",
5642 : : MySubscription->name, err)));
5643 : :
5644 : : /*
5645 : : * We don't really use the output identify_system for anything but it does
5646 : : * some initializations on the upstream so let's still call it.
5647 : : */
5648 : 220 : (void) walrcv_identify_system(LogRepWorkerWalRcvConn, &startpointTLI);
5649 : :
5650 : 220 : set_apply_error_context_origin(originname);
5651 : :
5652 : 220 : set_stream_options(&options, slotname, &origin_startpos);
5653 : :
5654 : : /*
5655 : : * Even when the two_phase mode is requested by the user, it remains as
5656 : : * the tri-state PENDING until all tablesyncs have reached READY state.
5657 : : * Only then, can it become ENABLED.
5658 : : *
5659 : : * Note: If the subscription has no tables then leave the state as
5660 : : * PENDING, which allows ALTER SUBSCRIPTION ... REFRESH PUBLICATION to
5661 : : * work.
5662 : : */
5663 [ + + + + ]: 235 : if (MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING &&
5664 : 15 : AllTablesyncsReady())
5665 : : {
5666 : : /* Start streaming with two_phase enabled */
5667 : 9 : options.proto.logical.twophase = true;
5668 : 9 : walrcv_startstreaming(LogRepWorkerWalRcvConn, &options);
5669 : :
5670 : 9 : StartTransactionCommand();
5671 : :
5672 : : /*
5673 : : * Updating pg_subscription might involve TOAST table access, so
5674 : : * ensure we have a valid snapshot.
5675 : : */
151 nathan@postgresql.or 5676 : 9 : PushActiveSnapshot(GetTransactionSnapshot());
5677 : :
817 akapila@postgresql.o 5678 : 9 : UpdateTwoPhaseState(MySubscription->oid, LOGICALREP_TWOPHASE_STATE_ENABLED);
5679 : 9 : MySubscription->twophasestate = LOGICALREP_TWOPHASE_STATE_ENABLED;
151 nathan@postgresql.or 5680 : 9 : PopActiveSnapshot();
817 akapila@postgresql.o 5681 : 9 : CommitTransactionCommand();
5682 : : }
5683 : : else
5684 : : {
5685 : 211 : walrcv_startstreaming(LogRepWorkerWalRcvConn, &options);
5686 : : }
5687 : :
5688 [ + + + + : 220 : ereport(DEBUG1,
+ - + - ]
5689 : : (errmsg_internal("logical replication apply worker for subscription \"%s\" two_phase is %s",
5690 : : MySubscription->name,
5691 : : MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_DISABLED ? "DISABLED" :
5692 : : MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_PENDING ? "PENDING" :
5693 : : MySubscription->twophasestate == LOGICALREP_TWOPHASE_STATE_ENABLED ? "ENABLED" :
5694 : : "?")));
5695 : :
5696 : : /* Run the main loop. */
5697 : 220 : start_apply(origin_startpos);
1324 akapila@postgresql.o 5698 :UBC 0 : }
5699 : :
5700 : : /*
5701 : : * Common initialization for leader apply worker, parallel apply worker and
5702 : : * tablesync worker.
5703 : : *
5704 : : * Initialize the database connection, in-memory subscription and necessary
5705 : : * config options.
5706 : : */
5707 : : void
817 akapila@postgresql.o 5708 :CBC 529 : InitializeLogRepWorker(void)
5709 : : {
5710 : : MemoryContext oldctx;
5711 : :
5712 : : /* Run as replica session replication role. */
3204 peter_e@gmx.net 5713 : 529 : SetConfigOption("session_replication_role", "replica",
5714 : : PGC_SUSET, PGC_S_OVERRIDE);
5715 : :
5716 : : /* Connect to our database. */
5717 : 529 : BackgroundWorkerInitializeConnectionByOid(MyLogicalRepWorker->dbid,
2763 magnus@hagander.net 5718 : 529 : MyLogicalRepWorker->userid,
5719 : : 0);
5720 : :
5721 : : /*
5722 : : * Set always-secure search path, so malicious users can't redirect user
5723 : : * code (e.g. pg_index.indexprs).
5724 : : */
1905 noah@leadboat.com 5725 : 525 : SetConfigOption("search_path", "", PGC_SUSET, PGC_S_OVERRIDE);
5726 : :
5727 : : /* Load the subscription into persistent memory context. */
3094 peter_e@gmx.net 5728 : 525 : ApplyContext = AllocSetContextCreate(TopMemoryContext,
5729 : : "ApplyContext",
5730 : : ALLOCSET_DEFAULT_SIZES);
3204 5731 : 525 : StartTransactionCommand();
3094 5732 : 525 : oldctx = MemoryContextSwitchTo(ApplyContext);
5733 : :
5734 : : /*
5735 : : * Lock the subscription to prevent it from being concurrently dropped,
5736 : : * then re-verify its existence. After the initialization, the worker will
5737 : : * be terminated gracefully if the subscription is dropped.
5738 : : */
70 akapila@postgresql.o 5739 : 525 : LockSharedObject(SubscriptionRelationId, MyLogicalRepWorker->subid, 0,
5740 : : AccessShareLock);
2762 peter_e@gmx.net 5741 : 523 : MySubscription = GetSubscription(MyLogicalRepWorker->subid, true);
5742 [ + + ]: 523 : if (!MySubscription)
5743 : : {
5744 [ + - ]: 59 : ereport(LOG,
5745 : : (errmsg("logical replication worker for subscription %u will not start because the subscription was removed during startup",
5746 : : MyLogicalRepWorker->subid)));
5747 : :
5748 : : /* Ensure we remove no-longer-useful entry for worker's start time */
816 akapila@postgresql.o 5749 [ + - ]: 59 : if (am_leader_apply_worker())
1010 tgl@sss.pgh.pa.us 5750 : 59 : ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);
5751 : :
2762 peter_e@gmx.net 5752 : 59 : proc_exit(0);
5753 : : }
5754 : :
3204 5755 : 464 : MySubscriptionValid = true;
5756 : 464 : MemoryContextSwitchTo(oldctx);
5757 : :
5758 [ - + ]: 464 : if (!MySubscription->enabled)
5759 : : {
3204 peter_e@gmx.net 5760 [ # # ]:UBC 0 : ereport(LOG,
5761 : : (errmsg("logical replication worker for subscription \"%s\" will not start because the subscription was disabled during startup",
5762 : : MySubscription->name)));
5763 : :
1023 akapila@postgresql.o 5764 : 0 : apply_worker_exit();
5765 : : }
5766 : :
5767 : : /*
5768 : : * Restart the worker if retain_dead_tuples was enabled during startup.
5769 : : *
5770 : : * At this point, the replication slot used for conflict detection might
5771 : : * not exist yet, or could be dropped soon if the launcher perceives
5772 : : * retain_dead_tuples as disabled. To avoid unnecessary tracking of
5773 : : * oldest_nonremovable_xid when the slot is absent or at risk of being
5774 : : * dropped, a restart is initiated.
5775 : : *
5776 : : * The oldest_nonremovable_xid should be initialized only when the
5777 : : * subscription's retention is active before launching the worker. See
5778 : : * logicalrep_worker_launch.
5779 : : */
97 akapila@postgresql.o 5780 [ + + ]:GNC 464 : if (am_leader_apply_worker() &&
5781 [ + + ]: 259 : MySubscription->retaindeadtuples &&
56 5782 [ + - ]: 11 : MySubscription->retentionactive &&
97 5783 [ - + ]: 11 : !TransactionIdIsValid(MyLogicalRepWorker->oldest_nonremovable_xid))
5784 : : {
97 akapila@postgresql.o 5785 [ # # ]:UNC 0 : ereport(LOG,
5786 : : errmsg("logical replication worker for subscription \"%s\" will restart because the option %s was enabled during startup",
5787 : : MySubscription->name, "retain_dead_tuples"));
5788 : :
5789 : 0 : apply_worker_exit();
5790 : : }
5791 : :
5792 : : /* Setup synchronous commit according to the user's wishes */
2762 peter_e@gmx.net 5793 :CBC 464 : SetConfigOption("synchronous_commit", MySubscription->synccommit,
5794 : : PGC_BACKEND, PGC_S_OVERRIDE);
5795 : :
5796 : : /*
5797 : : * Keep us informed about subscription or role changes. Note that the
5798 : : * role's superuser privilege can be revoked.
5799 : : */
3204 5800 : 464 : CacheRegisterSyscacheCallback(SUBSCRIPTIONOID,
5801 : : subscription_change_cb,
5802 : : (Datum) 0);
5803 : :
742 akapila@postgresql.o 5804 : 464 : CacheRegisterSyscacheCallback(AUTHOID,
5805 : : subscription_change_cb,
5806 : : (Datum) 0);
5807 : :
3141 peter_e@gmx.net 5808 [ + + ]: 464 : if (am_tablesync_worker())
3079 5809 [ + - ]: 195 : ereport(LOG,
5810 : : (errmsg("logical replication table synchronization worker for subscription \"%s\", table \"%s\" has started",
5811 : : MySubscription->name,
5812 : : get_rel_name(MyLogicalRepWorker->relid))));
5813 : : else
5814 [ + - ]: 269 : ereport(LOG,
5815 : : (errmsg("logical replication apply worker for subscription \"%s\" has started",
5816 : : MySubscription->name)));
5817 : :
3204 5818 : 464 : CommitTransactionCommand();
1023 akapila@postgresql.o 5819 : 464 : }
5820 : :
5821 : : /*
5822 : : * Reset the origin state.
5823 : : */
5824 : : static void
433 5825 : 532 : replorigin_reset(int code, Datum arg)
5826 : : {
5827 : 532 : replorigin_session_origin = InvalidRepOriginId;
5828 : 532 : replorigin_session_origin_lsn = InvalidXLogRecPtr;
5829 : 532 : replorigin_session_origin_timestamp = 0;
5830 : 532 : }
5831 : :
5832 : : /* Common function to setup the leader apply or tablesync worker. */
5833 : : void
817 5834 : 519 : SetupApplyOrSyncWorker(int worker_slot)
5835 : : {
5836 : : /* Attach to slot */
1023 5837 : 519 : logicalrep_worker_attach(worker_slot);
5838 : :
817 5839 [ + + - + ]: 519 : Assert(am_tablesync_worker() || am_leader_apply_worker());
5840 : :
5841 : : /* Setup signal handling */
1023 5842 : 519 : pqsignal(SIGHUP, SignalHandlerForConfigReload);
5843 : 519 : pqsignal(SIGTERM, die);
5844 : 519 : BackgroundWorkerUnblockSignals();
5845 : :
5846 : : /*
5847 : : * We don't currently need any ResourceOwner in a walreceiver process, but
5848 : : * if we did, we could call CreateAuxProcessResourceOwner here.
5849 : : */
5850 : :
5851 : : /* Initialise stats to a sanish value */
5852 : 519 : MyLogicalRepWorker->last_send_time = MyLogicalRepWorker->last_recv_time =
5853 : 519 : MyLogicalRepWorker->reply_time = GetCurrentTimestamp();
5854 : :
5855 : : /* Load the libpq-specific functions */
5856 : 519 : load_file("libpqwalreceiver", false);
5857 : :
817 5858 : 519 : InitializeLogRepWorker();
5859 : :
5860 : : /*
5861 : : * Register a callback to reset the origin state before aborting any
5862 : : * pending transaction during shutdown (see ShutdownPostgres()). This will
5863 : : * avoid origin advancement for an in-complete transaction which could
5864 : : * otherwise lead to its loss as such a transaction won't be sent by the
5865 : : * server again.
5866 : : *
5867 : : * Note that even a LOG or DEBUG statement placed after setting the origin
5868 : : * state may process a shutdown signal before committing the current apply
5869 : : * operation. So, it is important to register such a callback here.
5870 : : */
433 5871 : 454 : before_shmem_exit(replorigin_reset, (Datum) 0);
5872 : :
5873 : : /* Connect to the origin and start the replication. */
3204 peter_e@gmx.net 5874 [ + + ]: 454 : elog(DEBUG1, "connecting to publisher using connection string \"%s\"",
5875 : : MySubscription->conninfo);
5876 : :
5877 : : /*
5878 : : * Setup callback for syscache so that we know when something changes in
5879 : : * the subscription relation state.
5880 : : */
3141 5881 : 454 : CacheRegisterSyscacheCallback(SUBSCRIPTIONRELMAP,
5882 : : InvalidateSyncingRelStates,
5883 : : (Datum) 0);
817 akapila@postgresql.o 5884 : 454 : }
5885 : :
5886 : : /* Logical Replication Apply worker entry point */
5887 : : void
5888 : 321 : ApplyWorkerMain(Datum main_arg)
5889 : : {
5890 : 321 : int worker_slot = DatumGetInt32(main_arg);
5891 : :
5892 : 321 : InitializingApplyWorker = true;
5893 : :
5894 : 321 : SetupApplyOrSyncWorker(worker_slot);
5895 : :
5896 : 259 : InitializingApplyWorker = false;
5897 : :
5898 : 259 : run_apply_worker();
5899 : :
1324 akapila@postgresql.o 5900 :UBC 0 : proc_exit(0);
5901 : : }
5902 : :
5903 : : /*
5904 : : * After error recovery, disable the subscription in a new transaction
5905 : : * and exit cleanly.
5906 : : */
5907 : : void
1324 akapila@postgresql.o 5908 :CBC 4 : DisableSubscriptionAndExit(void)
5909 : : {
5910 : : /*
5911 : : * Emit the error message, and recover from the error state to an idle
5912 : : * state
5913 : : */
5914 : 4 : HOLD_INTERRUPTS();
5915 : :
5916 : 4 : EmitErrorReport();
5917 : 4 : AbortOutOfAnyTransaction();
5918 : 4 : FlushErrorState();
5919 : :
5920 [ - + ]: 4 : RESUME_INTERRUPTS();
5921 : :
5922 : : /* Report the worker failed during either table synchronization or apply */
5923 : 4 : pgstat_report_subscription_error(MyLogicalRepWorker->subid,
5924 : 4 : !am_tablesync_worker());
5925 : :
5926 : : /* Disable the subscription */
5927 : 4 : StartTransactionCommand();
5928 : :
5929 : : /*
5930 : : * Updating pg_subscription might involve TOAST table access, so ensure we
5931 : : * have a valid snapshot.
5932 : : */
151 nathan@postgresql.or 5933 : 4 : PushActiveSnapshot(GetTransactionSnapshot());
5934 : :
1324 akapila@postgresql.o 5935 : 4 : DisableSubscription(MySubscription->oid);
151 nathan@postgresql.or 5936 : 4 : PopActiveSnapshot();
1324 akapila@postgresql.o 5937 : 4 : CommitTransactionCommand();
5938 : :
5939 : : /* Ensure we remove no-longer-useful entry for worker's start time */
816 5940 [ + + ]: 4 : if (am_leader_apply_worker())
1010 tgl@sss.pgh.pa.us 5941 : 3 : ApplyLauncherForgetWorkerStartTime(MyLogicalRepWorker->subid);
5942 : :
5943 : : /* Notify the subscription has been disabled and exit */
1324 akapila@postgresql.o 5944 [ + - ]: 4 : ereport(LOG,
5945 : : errmsg("subscription \"%s\" has been disabled because of an error",
5946 : : MySubscription->name));
5947 : :
5948 : : /*
5949 : : * Skip the track_commit_timestamp check when disabling the worker due to
5950 : : * an error, as verifying commit timestamps is unnecessary in this
5951 : : * context.
5952 : : */
56 akapila@postgresql.o 5953 :GNC 4 : CheckSubDeadTupleRetention(false, true, WARNING,
5954 : 4 : MySubscription->retaindeadtuples,
5955 : 4 : MySubscription->retentionactive, false);
5956 : :
3204 peter_e@gmx.net 5957 :CBC 4 : proc_exit(0);
5958 : : }
5959 : :
5960 : : /*
5961 : : * Is current process a logical replication worker?
5962 : : */
5963 : : bool
3070 5964 : 2006 : IsLogicalWorker(void)
5965 : : {
5966 : 2006 : return MyLogicalRepWorker != NULL;
5967 : : }
5968 : :
5969 : : /*
5970 : : * Is current process a logical replication parallel apply worker?
5971 : : */
5972 : : bool
1023 akapila@postgresql.o 5973 : 1398 : IsLogicalParallelApplyWorker(void)
5974 : : {
5975 [ + + + - ]: 1398 : return IsLogicalWorker() && am_parallel_apply_worker();
5976 : : }
5977 : :
5978 : : /*
5979 : : * Start skipping changes of the transaction if the given LSN matches the
5980 : : * LSN specified by subscription's skiplsn.
5981 : : */
5982 : : static void
1316 5983 : 537 : maybe_start_skipping_changes(XLogRecPtr finish_lsn)
5984 : : {
5985 [ - + ]: 537 : Assert(!is_skipping_changes());
5986 [ - + ]: 537 : Assert(!in_remote_transaction);
5987 [ - + ]: 537 : Assert(!in_streamed_transaction);
5988 : :
5989 : : /*
5990 : : * Quick return if it's not requested to skip this transaction. This
5991 : : * function is called for every remote transaction and we assume that
5992 : : * skipping the transaction is not used often.
5993 : : */
5994 [ + + - + : 537 : if (likely(XLogRecPtrIsInvalid(MySubscription->skiplsn) ||
+ + ]
5995 : : MySubscription->skiplsn != finish_lsn))
5996 : 534 : return;
5997 : :
5998 : : /* Start skipping all changes of this transaction */
5999 : 3 : skip_xact_finish_lsn = finish_lsn;
6000 : :
6001 [ + - ]: 3 : ereport(LOG,
6002 : : errmsg("logical replication starts skipping transaction at LSN %X/%08X",
6003 : : LSN_FORMAT_ARGS(skip_xact_finish_lsn)));
6004 : : }
6005 : :
6006 : : /*
6007 : : * Stop skipping changes by resetting skip_xact_finish_lsn if enabled.
6008 : : */
6009 : : static void
6010 : 27 : stop_skipping_changes(void)
6011 : : {
6012 [ + + ]: 27 : if (!is_skipping_changes())
6013 : 24 : return;
6014 : :
6015 [ + - ]: 3 : ereport(LOG,
6016 : : errmsg("logical replication completed skipping transaction at LSN %X/%08X",
6017 : : LSN_FORMAT_ARGS(skip_xact_finish_lsn)));
6018 : :
6019 : : /* Stop skipping changes */
6020 : 3 : skip_xact_finish_lsn = InvalidXLogRecPtr;
6021 : : }
6022 : :
6023 : : /*
6024 : : * Clear subskiplsn of pg_subscription catalog.
6025 : : *
6026 : : * finish_lsn is the transaction's finish LSN that is used to check if the
6027 : : * subskiplsn matches it. If not matched, we raise a warning when clearing the
6028 : : * subskiplsn in order to inform users for cases e.g., where the user mistakenly
6029 : : * specified the wrong subskiplsn.
6030 : : */
6031 : : static void
6032 : 535 : clear_subscription_skip_lsn(XLogRecPtr finish_lsn)
6033 : : {
6034 : : Relation rel;
6035 : : Form_pg_subscription subform;
6036 : : HeapTuple tup;
6037 : 535 : XLogRecPtr myskiplsn = MySubscription->skiplsn;
6038 : 535 : bool started_tx = false;
6039 : :
1023 6040 [ + + - + ]: 535 : if (likely(XLogRecPtrIsInvalid(myskiplsn)) || am_parallel_apply_worker())
1316 6041 : 532 : return;
6042 : :
6043 [ + + ]: 3 : if (!IsTransactionState())
6044 : : {
6045 : 1 : StartTransactionCommand();
6046 : 1 : started_tx = true;
6047 : : }
6048 : :
6049 : : /*
6050 : : * Updating pg_subscription might involve TOAST table access, so ensure we
6051 : : * have a valid snapshot.
6052 : : */
151 nathan@postgresql.or 6053 : 3 : PushActiveSnapshot(GetTransactionSnapshot());
6054 : :
6055 : : /*
6056 : : * Protect subskiplsn of pg_subscription from being concurrently updated
6057 : : * while clearing it.
6058 : : */
1316 akapila@postgresql.o 6059 : 3 : LockSharedObject(SubscriptionRelationId, MySubscription->oid, 0,
6060 : : AccessShareLock);
6061 : :
6062 : 3 : rel = table_open(SubscriptionRelationId, RowExclusiveLock);
6063 : :
6064 : : /* Fetch the existing tuple. */
6065 : 3 : tup = SearchSysCacheCopy1(SUBSCRIPTIONOID,
6066 : : ObjectIdGetDatum(MySubscription->oid));
6067 : :
6068 [ - + ]: 3 : if (!HeapTupleIsValid(tup))
1316 akapila@postgresql.o 6069 [ # # ]:UBC 0 : elog(ERROR, "subscription \"%s\" does not exist", MySubscription->name);
6070 : :
1316 akapila@postgresql.o 6071 :CBC 3 : subform = (Form_pg_subscription) GETSTRUCT(tup);
6072 : :
6073 : : /*
6074 : : * Clear the subskiplsn. If the user has already changed subskiplsn before
6075 : : * clearing it we don't update the catalog and the replication origin
6076 : : * state won't get advanced. So in the worst case, if the server crashes
6077 : : * before sending an acknowledgment of the flush position the transaction
6078 : : * will be sent again and the user needs to set subskiplsn again. We can
6079 : : * reduce the possibility by logging a replication origin WAL record to
6080 : : * advance the origin LSN instead but there is no way to advance the
6081 : : * origin timestamp and it doesn't seem to be worth doing anything about
6082 : : * it since it's a very rare case.
6083 : : */
6084 [ + - ]: 3 : if (subform->subskiplsn == myskiplsn)
6085 : : {
6086 : : bool nulls[Natts_pg_subscription];
6087 : : bool replaces[Natts_pg_subscription];
6088 : : Datum values[Natts_pg_subscription];
6089 : :
6090 : 3 : memset(values, 0, sizeof(values));
6091 : 3 : memset(nulls, false, sizeof(nulls));
6092 : 3 : memset(replaces, false, sizeof(replaces));
6093 : :
6094 : : /* reset subskiplsn */
6095 : 3 : values[Anum_pg_subscription_subskiplsn - 1] = LSNGetDatum(InvalidXLogRecPtr);
6096 : 3 : replaces[Anum_pg_subscription_subskiplsn - 1] = true;
6097 : :
6098 : 3 : tup = heap_modify_tuple(tup, RelationGetDescr(rel), values, nulls,
6099 : : replaces);
6100 : 3 : CatalogTupleUpdate(rel, &tup->t_self, tup);
6101 : :
6102 [ - + ]: 3 : if (myskiplsn != finish_lsn)
1316 akapila@postgresql.o 6103 [ # # ]:UBC 0 : ereport(WARNING,
6104 : : errmsg("skip-LSN of subscription \"%s\" cleared", MySubscription->name),
6105 : : errdetail("Remote transaction's finish WAL location (LSN) %X/%08X did not match skip-LSN %X/%08X.",
6106 : : LSN_FORMAT_ARGS(finish_lsn),
6107 : : LSN_FORMAT_ARGS(myskiplsn)));
6108 : : }
6109 : :
1316 akapila@postgresql.o 6110 :CBC 3 : heap_freetuple(tup);
6111 : 3 : table_close(rel, NoLock);
6112 : :
151 nathan@postgresql.or 6113 : 3 : PopActiveSnapshot();
6114 : :
1316 akapila@postgresql.o 6115 [ + + ]: 3 : if (started_tx)
6116 : 1 : CommitTransactionCommand();
6117 : : }
6118 : :
6119 : : /* Error callback to give more context info about the change being applied */
6120 : : void
1523 6121 : 1036 : apply_error_callback(void *arg)
6122 : : {
6123 : 1036 : ApplyErrorCallbackArg *errarg = &apply_error_callback_arg;
6124 : :
6125 [ + + ]: 1036 : if (apply_error_callback_arg.command == 0)
6126 : 525 : return;
6127 : :
1330 6128 [ - + ]: 511 : Assert(errarg->origin_name);
6129 : :
1331 6130 [ + + ]: 511 : if (errarg->rel == NULL)
6131 : : {
6132 [ - + ]: 342 : if (!TransactionIdIsValid(errarg->remote_xid))
1130 peter@eisentraut.org 6133 :UBC 0 : errcontext("processing remote data for replication origin \"%s\" during message type \"%s\"",
6134 : : errarg->origin_name,
6135 : : logicalrep_message_type(errarg->command));
1330 akapila@postgresql.o 6136 [ + + ]:CBC 342 : else if (XLogRecPtrIsInvalid(errarg->finish_lsn))
1130 peter@eisentraut.org 6137 : 269 : errcontext("processing remote data for replication origin \"%s\" during message type \"%s\" in transaction %u",
6138 : : errarg->origin_name,
6139 : : logicalrep_message_type(errarg->command),
6140 : : errarg->remote_xid);
6141 : : else
113 alvherre@kurilemu.de 6142 :GNC 146 : errcontext("processing remote data for replication origin \"%s\" during message type \"%s\" in transaction %u, finished at %X/%08X",
6143 : : errarg->origin_name,
6144 : : logicalrep_message_type(errarg->command),
6145 : : errarg->remote_xid,
1330 akapila@postgresql.o 6146 :CBC 73 : LSN_FORMAT_ARGS(errarg->finish_lsn));
6147 : : }
6148 : : else
6149 : : {
1023 6150 [ + - ]: 169 : if (errarg->remote_attnum < 0)
6151 : : {
6152 [ + + ]: 169 : if (XLogRecPtrIsInvalid(errarg->finish_lsn))
1023 akapila@postgresql.o 6153 :GBC 174 : errcontext("processing remote data for replication origin \"%s\" during message type \"%s\" for replication target relation \"%s.%s\" in transaction %u",
6154 : : errarg->origin_name,
6155 : : logicalrep_message_type(errarg->command),
6156 : 87 : errarg->rel->remoterel.nspname,
6157 : 87 : errarg->rel->remoterel.relname,
6158 : : errarg->remote_xid);
6159 : : else
113 alvherre@kurilemu.de 6160 :GNC 164 : errcontext("processing remote data for replication origin \"%s\" during message type \"%s\" for replication target relation \"%s.%s\" in transaction %u, finished at %X/%08X",
6161 : : errarg->origin_name,
6162 : : logicalrep_message_type(errarg->command),
1023 akapila@postgresql.o 6163 :CBC 82 : errarg->rel->remoterel.nspname,
6164 : 82 : errarg->rel->remoterel.relname,
6165 : : errarg->remote_xid,
6166 : 82 : LSN_FORMAT_ARGS(errarg->finish_lsn));
6167 : : }
6168 : : else
6169 : : {
1023 akapila@postgresql.o 6170 [ # # ]:UBC 0 : if (XLogRecPtrIsInvalid(errarg->finish_lsn))
6171 : 0 : errcontext("processing remote data for replication origin \"%s\" during message type \"%s\" for replication target relation \"%s.%s\" column \"%s\" in transaction %u",
6172 : : errarg->origin_name,
6173 : : logicalrep_message_type(errarg->command),
6174 : 0 : errarg->rel->remoterel.nspname,
6175 : 0 : errarg->rel->remoterel.relname,
6176 : 0 : errarg->rel->remoterel.attnames[errarg->remote_attnum],
6177 : : errarg->remote_xid);
6178 : : else
113 alvherre@kurilemu.de 6179 :UNC 0 : errcontext("processing remote data for replication origin \"%s\" during message type \"%s\" for replication target relation \"%s.%s\" column \"%s\" in transaction %u, finished at %X/%08X",
6180 : : errarg->origin_name,
6181 : : logicalrep_message_type(errarg->command),
1023 akapila@postgresql.o 6182 :UBC 0 : errarg->rel->remoterel.nspname,
6183 : 0 : errarg->rel->remoterel.relname,
6184 : 0 : errarg->rel->remoterel.attnames[errarg->remote_attnum],
6185 : : errarg->remote_xid,
6186 : 0 : LSN_FORMAT_ARGS(errarg->finish_lsn));
6187 : : }
6188 : : }
6189 : : }
6190 : :
6191 : : /* Set transaction information of apply error callback */
6192 : : static inline void
1330 akapila@postgresql.o 6193 :CBC 2948 : set_apply_error_context_xact(TransactionId xid, XLogRecPtr lsn)
6194 : : {
1523 6195 : 2948 : apply_error_callback_arg.remote_xid = xid;
1330 6196 : 2948 : apply_error_callback_arg.finish_lsn = lsn;
1523 6197 : 2948 : }
6198 : :
6199 : : /* Reset all information of apply error callback */
6200 : : static inline void
6201 : 1446 : reset_apply_error_context_info(void)
6202 : : {
6203 : 1446 : apply_error_callback_arg.command = 0;
6204 : 1446 : apply_error_callback_arg.rel = NULL;
6205 : 1446 : apply_error_callback_arg.remote_attnum = -1;
1330 6206 : 1446 : set_apply_error_context_xact(InvalidTransactionId, InvalidXLogRecPtr);
1523 6207 : 1446 : }
6208 : :
6209 : : /*
6210 : : * Request wakeup of the workers for the given subscription OID
6211 : : * at commit of the current transaction.
6212 : : *
6213 : : * This is used to ensure that the workers process assorted changes
6214 : : * as soon as possible.
6215 : : */
6216 : : void
1026 tgl@sss.pgh.pa.us 6217 : 218 : LogicalRepWorkersWakeupAtCommit(Oid subid)
6218 : : {
6219 : : MemoryContext oldcxt;
6220 : :
6221 : 218 : oldcxt = MemoryContextSwitchTo(TopTransactionContext);
6222 : 218 : on_commit_wakeup_workers_subids =
6223 : 218 : list_append_unique_oid(on_commit_wakeup_workers_subids, subid);
6224 : 218 : MemoryContextSwitchTo(oldcxt);
6225 : 218 : }
6226 : :
6227 : : /*
6228 : : * Wake up the workers of any subscriptions that were changed in this xact.
6229 : : */
6230 : : void
6231 : 321818 : AtEOXact_LogicalRepWorkers(bool isCommit)
6232 : : {
6233 [ + + + + ]: 321818 : if (isCommit && on_commit_wakeup_workers_subids != NIL)
6234 : : {
6235 : : ListCell *lc;
6236 : :
6237 : 213 : LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);
6238 [ + - + + : 426 : foreach(lc, on_commit_wakeup_workers_subids)
+ + ]
6239 : : {
6240 : 213 : Oid subid = lfirst_oid(lc);
6241 : : List *workers;
6242 : : ListCell *lc2;
6243 : :
461 akapila@postgresql.o 6244 : 213 : workers = logicalrep_workers_find(subid, true, false);
1026 tgl@sss.pgh.pa.us 6245 [ + + + + : 282 : foreach(lc2, workers)
+ + ]
6246 : : {
6247 : 69 : LogicalRepWorker *worker = (LogicalRepWorker *) lfirst(lc2);
6248 : :
6249 : 69 : logicalrep_worker_wakeup_ptr(worker);
6250 : : }
6251 : : }
6252 : 213 : LWLockRelease(LogicalRepWorkerLock);
6253 : : }
6254 : :
6255 : : /* The List storage will be reclaimed automatically in xact cleanup. */
6256 : 321818 : on_commit_wakeup_workers_subids = NIL;
6257 : 321818 : }
6258 : :
6259 : : /*
6260 : : * Allocate the origin name in long-lived context for error context message.
6261 : : */
6262 : : void
1023 akapila@postgresql.o 6263 : 414 : set_apply_error_context_origin(char *originname)
6264 : : {
6265 : 414 : apply_error_callback_arg.origin_name = MemoryContextStrdup(ApplyContext,
6266 : : originname);
6267 : 414 : }
6268 : :
6269 : : /*
6270 : : * Return the action to be taken for the given transaction. See
6271 : : * TransApplyAction for information on each of the actions.
6272 : : *
6273 : : * *winfo is assigned to the destination parallel worker info when the leader
6274 : : * apply worker has to pass all the transaction's changes to the parallel
6275 : : * apply worker.
6276 : : */
6277 : : static TransApplyAction
6278 : 326342 : get_transaction_apply_action(TransactionId xid, ParallelApplyWorkerInfo **winfo)
6279 : : {
6280 : 326342 : *winfo = NULL;
6281 : :
6282 [ + + ]: 326342 : if (am_parallel_apply_worker())
6283 : : {
6284 : 68988 : return TRANS_PARALLEL_APPLY;
6285 : : }
6286 : :
6287 : : /*
6288 : : * If we are processing this transaction using a parallel apply worker
6289 : : * then either we send the changes to the parallel worker or if the worker
6290 : : * is busy then serialize the changes to the file which will later be
6291 : : * processed by the parallel worker.
6292 : : */
6293 : 257354 : *winfo = pa_find_worker(xid);
6294 : :
1015 6295 [ + + + + ]: 257354 : if (*winfo && (*winfo)->serialize_changes)
6296 : : {
6297 : 5037 : return TRANS_LEADER_PARTIAL_SERIALIZE;
6298 : : }
6299 [ + + ]: 252317 : else if (*winfo)
6300 : : {
6301 : 68914 : return TRANS_LEADER_SEND_TO_PARALLEL;
6302 : : }
6303 : :
6304 : : /*
6305 : : * If there is no parallel worker involved to process this transaction
6306 : : * then we either directly apply the change or serialize it to a file
6307 : : * which will later be applied when the transaction finish message is
6308 : : * processed.
6309 : : */
6310 [ + + ]: 183403 : else if (in_streamed_transaction)
6311 : : {
6312 : 103198 : return TRANS_LEADER_SERIALIZE;
6313 : : }
6314 : : else
6315 : : {
6316 : 80205 : return TRANS_LEADER_APPLY;
6317 : : }
6318 : : }
|