Age Owner Branch data TLA Line data Source code
1 : : /*-------------------------------------------------------------------------
2 : : *
3 : : * inval.c
4 : : * POSTGRES cache invalidation dispatcher code.
5 : : *
6 : : * This is subtle stuff, so pay attention:
7 : : *
8 : : * When a tuple is updated or deleted, our standard visibility rules
9 : : * consider that it is *still valid* so long as we are in the same command,
10 : : * ie, until the next CommandCounterIncrement() or transaction commit.
11 : : * (See access/heap/heapam_visibility.c, and note that system catalogs are
12 : : * generally scanned under the most current snapshot available, rather than
13 : : * the transaction snapshot.) At the command boundary, the old tuple stops
14 : : * being valid and the new version, if any, becomes valid. Therefore,
15 : : * we cannot simply flush a tuple from the system caches during heap_update()
16 : : * or heap_delete(). The tuple is still good at that point; what's more,
17 : : * even if we did flush it, it might be reloaded into the caches by a later
18 : : * request in the same command. So the correct behavior is to keep a list
19 : : * of outdated (updated/deleted) tuples and then do the required cache
20 : : * flushes at the next command boundary. We must also keep track of
21 : : * inserted tuples so that we can flush "negative" cache entries that match
22 : : * the new tuples; again, that mustn't happen until end of command.
23 : : *
24 : : * Once we have finished the command, we still need to remember inserted
25 : : * tuples (including new versions of updated tuples), so that we can flush
26 : : * them from the caches if we abort the transaction. Similarly, we'd better
27 : : * be able to flush "negative" cache entries that may have been loaded in
28 : : * place of deleted tuples, so we still need the deleted ones too.
29 : : *
30 : : * If we successfully complete the transaction, we have to broadcast all
31 : : * these invalidation events to other backends (via the SI message queue)
32 : : * so that they can flush obsolete entries from their caches. Note we have
33 : : * to record the transaction commit before sending SI messages, otherwise
34 : : * the other backends won't see our updated tuples as good.
35 : : *
36 : : * When a subtransaction aborts, we can process and discard any events
37 : : * it has queued. When a subtransaction commits, we just add its events
38 : : * to the pending lists of the parent transaction.
39 : : *
40 : : * In short, we need to remember until xact end every insert or delete
41 : : * of a tuple that might be in the system caches. Updates are treated as
42 : : * two events, delete + insert, for simplicity. (If the update doesn't
43 : : * change the tuple hash value, catcache.c optimizes this into one event.)
44 : : *
45 : : * We do not need to register EVERY tuple operation in this way, just those
46 : : * on tuples in relations that have associated catcaches. We do, however,
47 : : * have to register every operation on every tuple that *could* be in a
48 : : * catcache, whether or not it currently is in our cache. Also, if the
49 : : * tuple is in a relation that has multiple catcaches, we need to register
50 : : * an invalidation message for each such catcache. catcache.c's
51 : : * PrepareToInvalidateCacheTuple() routine provides the knowledge of which
52 : : * catcaches may need invalidation for a given tuple.
53 : : *
54 : : * Also, whenever we see an operation on a pg_class, pg_attribute, or
55 : : * pg_index tuple, we register a relcache flush operation for the relation
56 : : * described by that tuple (as specified in CacheInvalidateHeapTuple()).
57 : : * Likewise for pg_constraint tuples for foreign keys on relations.
58 : : *
59 : : * We keep the relcache flush requests in lists separate from the catcache
60 : : * tuple flush requests. This allows us to issue all the pending catcache
61 : : * flushes before we issue relcache flushes, which saves us from loading
62 : : * a catcache tuple during relcache load only to flush it again right away.
63 : : * Also, we avoid queuing multiple relcache flush requests for the same
64 : : * relation, since a relcache flush is relatively expensive to do.
65 : : * (XXX is it worth testing likewise for duplicate catcache flush entries?
66 : : * Probably not.)
67 : : *
68 : : * Many subsystems own higher-level caches that depend on relcache and/or
69 : : * catcache, and they register callbacks here to invalidate their caches.
70 : : * While building a higher-level cache entry, a backend may receive a
71 : : * callback for the being-built entry or one of its dependencies. This
72 : : * implies the new higher-level entry would be born stale, and it might
73 : : * remain stale for the life of the backend. Many caches do not prevent
 74 : :  * that.  They rely on DDL that makes can't-miss catalog changes taking
75 : : * AccessExclusiveLock on suitable objects. (For a change made with less
76 : : * locking, backends might never read the change.) The relation cache,
77 : : * however, needs to reflect changes from CREATE INDEX CONCURRENTLY no later
78 : : * than the beginning of the next transaction. Hence, when a relevant
79 : : * invalidation callback arrives during a build, relcache.c reattempts that
80 : : * build. Caches with similar needs could do likewise.
81 : : *
82 : : * If a relcache flush is issued for a system relation that we preload
83 : : * from the relcache init file, we must also delete the init file so that
84 : : * it will be rebuilt during the next backend restart. The actual work of
85 : : * manipulating the init file is in relcache.c, but we keep track of the
86 : : * need for it here.
87 : : *
88 : : * Currently, inval messages are sent without regard for the possibility
89 : : * that the object described by the catalog tuple might be a session-local
90 : : * object such as a temporary table. This is because (1) this code has
91 : : * no practical way to tell the difference, and (2) it is not certain that
92 : : * other backends don't have catalog cache or even relcache entries for
93 : : * such tables, anyway; there is nothing that prevents that. It might be
94 : : * worth trying to avoid sending such inval traffic in the future, if those
95 : : * problems can be overcome cheaply.
96 : : *
97 : : * When making a nontransactional change to a cacheable object, we must
98 : : * likewise send the invalidation immediately, before ending the change's
99 : : * critical section. This includes inplace heap updates, relmap, and smgr.
100 : : *
101 : : * When wal_level=logical, write invalidations into WAL at each command end to
 102 : :  * support decoding of in-progress transactions.  See
103 : : * CommandEndInvalidationMessages.
104 : : *
105 : : * Portions Copyright (c) 1996-2025, PostgreSQL Global Development Group
106 : : * Portions Copyright (c) 1994, Regents of the University of California
107 : : *
108 : : * IDENTIFICATION
109 : : * src/backend/utils/cache/inval.c
110 : : *
111 : : *-------------------------------------------------------------------------
112 : : */
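The queue-then-flush lifecycle described in the header comment can be sketched in miniature. This is a standalone illustration, not PostgreSQL code: all names here are hypothetical, and real invalidation messages carry cache ids and hash values rather than bare keys. Events queue during a command, get applied locally and remembered at each command boundary, and the remembered set is what commit would broadcast via the SI queue.

```c
#include <assert.h>

#define MAXEVT 64

static int cur[MAXEVT], ncur;        /* events of the current command */
static int prior[MAXEVT], nprior;    /* events of earlier commands */
static int nflushed;                 /* count of locally flushed cache entries */

/* heap_update()/heap_delete() would queue an event, not flush immediately */
static void
queue_inval(int key)
{
    cur[ncur++] = key;
}

/* at CommandCounterIncrement(): old tuple versions stop being valid now */
static void
command_end(void)
{
    for (int i = 0; i < ncur; i++)
    {
        nflushed++;                 /* flush from local caches */
        prior[nprior++] = cur[i];   /* keep for commit/abort handling */
    }
    ncur = 0;
}

/* at commit: return how many messages would go to the SI queue */
static int
commit_broadcast(void)
{
    int n = nprior;

    nprior = 0;
    return n;
}
```

The key point the sketch preserves: nothing is flushed at queue time, because the outdated tuple is still valid until the command boundary.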
113 : : #include "postgres.h"
114 : :
115 : : #include <limits.h>
116 : :
117 : : #include "access/htup_details.h"
118 : : #include "access/xact.h"
119 : : #include "access/xloginsert.h"
120 : : #include "catalog/catalog.h"
121 : : #include "catalog/pg_constraint.h"
122 : : #include "miscadmin.h"
123 : : #include "storage/procnumber.h"
124 : : #include "storage/sinval.h"
125 : : #include "storage/smgr.h"
126 : : #include "utils/catcache.h"
127 : : #include "utils/injection_point.h"
128 : : #include "utils/inval.h"
129 : : #include "utils/memdebug.h"
130 : : #include "utils/memutils.h"
131 : : #include "utils/rel.h"
132 : : #include "utils/relmapper.h"
133 : : #include "utils/snapmgr.h"
134 : : #include "utils/syscache.h"
135 : :
136 : :
137 : : /*
138 : : * Pending requests are stored as ready-to-send SharedInvalidationMessages.
139 : : * We keep the messages themselves in arrays in TopTransactionContext (there
140 : : * are separate arrays for catcache and relcache messages). For transactional
141 : : * messages, control information is kept in a chain of TransInvalidationInfo
142 : : * structs, also allocated in TopTransactionContext. (We could keep a
143 : : * subtransaction's TransInvalidationInfo in its CurTransactionContext; but
 144 : :  * that's more wasteful, not less so, since in very many scenarios it'd be the
145 : : * only allocation in the subtransaction's CurTransactionContext.) For
146 : : * inplace update messages, control information appears in an
147 : : * InvalidationInfo, allocated in CurrentMemoryContext.
148 : : *
149 : : * We can store the message arrays densely, and yet avoid moving data around
150 : : * within an array, because within any one subtransaction we need only
151 : : * distinguish between messages emitted by prior commands and those emitted
152 : : * by the current command. Once a command completes and we've done local
153 : : * processing on its messages, we can fold those into the prior-commands
154 : : * messages just by changing array indexes in the TransInvalidationInfo
 155 : :  * struct.  Similarly, we need to distinguish messages of prior subtransactions
156 : : * from those of the current subtransaction only until the subtransaction
157 : : * completes, after which we adjust the array indexes in the parent's
158 : : * TransInvalidationInfo to include the subtransaction's messages. Inplace
159 : : * invalidations don't need a concept of command or subtransaction boundaries,
160 : : * since we send them during the WAL insertion critical section.
161 : : *
162 : : * The ordering of the individual messages within a command's or
163 : : * subtransaction's output is not considered significant, although this
164 : : * implementation happens to preserve the order in which they were queued.
165 : : * (Previous versions of this code did not preserve it.)
166 : : *
167 : : * For notational convenience, control information is kept in two-element
168 : : * arrays, the first for catcache messages and the second for relcache
169 : : * messages.
170 : : */
171 : : #define CatCacheMsgs 0
172 : : #define RelCacheMsgs 1
173 : :
174 : : /* Pointers to main arrays in TopTransactionContext */
175 : : typedef struct InvalMessageArray
176 : : {
177 : : SharedInvalidationMessage *msgs; /* palloc'd array (can be expanded) */
178 : : int maxmsgs; /* current allocated size of array */
179 : : } InvalMessageArray;
180 : :
181 : : static InvalMessageArray InvalMessageArrays[2];
182 : :
183 : : /* Control information for one logical group of messages */
184 : : typedef struct InvalidationMsgsGroup
185 : : {
186 : : int firstmsg[2]; /* first index in relevant array */
187 : : int nextmsg[2]; /* last+1 index */
188 : : } InvalidationMsgsGroup;
189 : :
190 : : /* Macros to help preserve InvalidationMsgsGroup abstraction */
191 : : #define SetSubGroupToFollow(targetgroup, priorgroup, subgroup) \
192 : : do { \
193 : : (targetgroup)->firstmsg[subgroup] = \
194 : : (targetgroup)->nextmsg[subgroup] = \
195 : : (priorgroup)->nextmsg[subgroup]; \
196 : : } while (0)
197 : :
198 : : #define SetGroupToFollow(targetgroup, priorgroup) \
199 : : do { \
200 : : SetSubGroupToFollow(targetgroup, priorgroup, CatCacheMsgs); \
201 : : SetSubGroupToFollow(targetgroup, priorgroup, RelCacheMsgs); \
202 : : } while (0)
203 : :
204 : : #define NumMessagesInSubGroup(group, subgroup) \
205 : : ((group)->nextmsg[subgroup] - (group)->firstmsg[subgroup])
206 : :
207 : : #define NumMessagesInGroup(group) \
208 : : (NumMessagesInSubGroup(group, CatCacheMsgs) + \
209 : : NumMessagesInSubGroup(group, RelCacheMsgs))
210 : :
211 : :
212 : : /*----------------
213 : : * Transactional invalidation messages are divided into two groups:
214 : : * 1) events so far in current command, not yet reflected to caches.
215 : : * 2) events in previous commands of current transaction; these have
216 : : * been reflected to local caches, and must be either broadcast to
217 : : * other backends or rolled back from local cache when we commit
218 : : * or abort the transaction.
219 : : * Actually, we need such groups for each level of nested transaction,
220 : : * so that we can discard events from an aborted subtransaction. When
221 : : * a subtransaction commits, we append its events to the parent's groups.
222 : : *
223 : : * The relcache-file-invalidated flag can just be a simple boolean,
224 : : * since we only act on it at transaction commit; we don't care which
225 : : * command of the transaction set it.
226 : : *----------------
227 : : */
228 : :
229 : : /* fields common to both transactional and inplace invalidation */
230 : : typedef struct InvalidationInfo
231 : : {
232 : : /* Events emitted by current command */
233 : : InvalidationMsgsGroup CurrentCmdInvalidMsgs;
234 : :
235 : : /* init file must be invalidated? */
236 : : bool RelcacheInitFileInval;
237 : : } InvalidationInfo;
238 : :
239 : : /* subclass adding fields specific to transactional invalidation */
240 : : typedef struct TransInvalidationInfo
241 : : {
242 : : /* Base class */
243 : : struct InvalidationInfo ii;
244 : :
245 : : /* Events emitted by previous commands of this (sub)transaction */
246 : : InvalidationMsgsGroup PriorCmdInvalidMsgs;
247 : :
248 : : /* Back link to parent transaction's info */
249 : : struct TransInvalidationInfo *parent;
250 : :
251 : : /* Subtransaction nesting depth */
252 : : int my_level;
253 : : } TransInvalidationInfo;
254 : :
255 : : static TransInvalidationInfo *transInvalInfo = NULL;
256 : :
257 : : static InvalidationInfo *inplaceInvalInfo = NULL;
258 : :
259 : : /* GUC storage */
260 : : int debug_discard_caches = 0;
261 : :
262 : : /*
263 : : * Dynamically-registered callback functions. Current implementation
264 : : * assumes there won't be enough of these to justify a dynamically resizable
265 : : * array; it'd be easy to improve that if needed.
266 : : *
267 : : * To avoid searching in CallSyscacheCallbacks, all callbacks for a given
268 : : * syscache are linked into a list pointed to by syscache_callback_links[id].
269 : : * The link values are syscache_callback_list[] index plus 1, or 0 for none.
270 : : */
271 : :
272 : : #define MAX_SYSCACHE_CALLBACKS 64
273 : : #define MAX_RELCACHE_CALLBACKS 10
274 : : #define MAX_RELSYNC_CALLBACKS 10
275 : :
276 : : static struct SYSCACHECALLBACK
277 : : {
278 : : int16 id; /* cache number */
279 : : int16 link; /* next callback index+1 for same cache */
280 : : SyscacheCallbackFunction function;
281 : : Datum arg;
282 : : } syscache_callback_list[MAX_SYSCACHE_CALLBACKS];
283 : :
284 : : static int16 syscache_callback_links[SysCacheSize];
285 : :
286 : : static int syscache_callback_count = 0;
287 : :
288 : : static struct RELCACHECALLBACK
289 : : {
290 : : RelcacheCallbackFunction function;
291 : : Datum arg;
292 : : } relcache_callback_list[MAX_RELCACHE_CALLBACKS];
293 : :
294 : : static int relcache_callback_count = 0;
295 : :
296 : : static struct RELSYNCCALLBACK
297 : : {
298 : : RelSyncCallbackFunction function;
299 : : Datum arg;
300 : : } relsync_callback_list[MAX_RELSYNC_CALLBACKS];
301 : :
302 : : static int relsync_callback_count = 0;
303 : :
304 : :
305 : : /* ----------------------------------------------------------------
306 : : * Invalidation subgroup support functions
307 : : * ----------------------------------------------------------------
308 : : */
309 : :
310 : : /*
311 : : * AddInvalidationMessage
312 : : * Add an invalidation message to a (sub)group.
313 : : *
314 : : * The group must be the last active one, since we assume we can add to the
315 : : * end of the relevant InvalMessageArray.
316 : : *
317 : : * subgroup must be CatCacheMsgs or RelCacheMsgs.
318 : : */
319 : : static void
1482 tgl@sss.pgh.pa.us 320 :CBC 3411592 : AddInvalidationMessage(InvalidationMsgsGroup *group, int subgroup,
321 : : const SharedInvalidationMessage *msg)
322 : : {
323 : 3411592 : InvalMessageArray *ima = &InvalMessageArrays[subgroup];
324 : 3411592 : int nextindex = group->nextmsg[subgroup];
325 : :
326 [ + + ]: 3411592 : if (nextindex >= ima->maxmsgs)
327 : : {
328 [ + + ]: 263628 : if (ima->msgs == NULL)
329 : : {
330 : : /* Create new storage array in TopTransactionContext */
331 : 235123 : int reqsize = 32; /* arbitrary */
332 : :
333 : 235123 : ima->msgs = (SharedInvalidationMessage *)
334 : 235123 : MemoryContextAlloc(TopTransactionContext,
335 : : reqsize * sizeof(SharedInvalidationMessage));
336 : 235123 : ima->maxmsgs = reqsize;
337 [ - + ]: 235123 : Assert(nextindex == 0);
338 : : }
339 : : else
340 : : {
341 : : /* Enlarge storage array */
342 : 28505 : int reqsize = 2 * ima->maxmsgs;
343 : :
344 : 28505 : ima->msgs = (SharedInvalidationMessage *)
345 : 28505 : repalloc(ima->msgs,
346 : : reqsize * sizeof(SharedInvalidationMessage));
347 : 28505 : ima->maxmsgs = reqsize;
348 : : }
349 : : }
350 : : /* Okay, add message to current group */
351 : 3411592 : ima->msgs[nextindex] = *msg;
352 : 3411592 : group->nextmsg[subgroup]++;
10651 scrappy@hub.org 353 : 3411592 : }
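The allocation pattern in AddInvalidationMessage (small initial array, capacity doubled when full) gives amortized O(1) appends. A hedged sketch, with illustrative names and plain `realloc` standing in for the memory-context allocation the real code uses:

```c
#include <stdlib.h>

typedef struct MsgArray
{
    int  *msgs;      /* heap-allocated, grown on demand */
    int   maxmsgs;   /* current allocated capacity */
    int   nused;     /* number of slots in use */
} MsgArray;

static void
msg_array_add(MsgArray *a, int msg)
{
    if (a->nused >= a->maxmsgs)
    {
        /* start small, then double; realloc(NULL, ...) acts as malloc */
        int newcap = (a->msgs == NULL) ? 32 : 2 * a->maxmsgs;

        a->msgs = realloc(a->msgs, newcap * sizeof(int));
        a->maxmsgs = newcap;
    }
    a->msgs[a->nused++] = msg;
}
```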
354 : :
355 : : /*
356 : : * Append one subgroup of invalidation messages to another, resetting
357 : : * the source subgroup to empty.
358 : : */
359 : : static void
1482 tgl@sss.pgh.pa.us 360 : 1045048 : AppendInvalidationMessageSubGroup(InvalidationMsgsGroup *dest,
361 : : InvalidationMsgsGroup *src,
362 : : int subgroup)
363 : : {
364 : : /* Messages must be adjacent in main array */
365 [ - + ]: 1045048 : Assert(dest->nextmsg[subgroup] == src->firstmsg[subgroup]);
366 : :
367 : : /* ... which makes this easy: */
368 : 1045048 : dest->nextmsg[subgroup] = src->nextmsg[subgroup];
369 : :
370 : : /*
371 : : * This is handy for some callers and irrelevant for others. But we do it
372 : : * always, reasoning that it's bad to leave different groups pointing at
373 : : * the same fragment of the message array.
374 : : */
375 : 1045048 : SetSubGroupToFollow(src, dest, subgroup);
8588 376 : 1045048 : }
377 : :
378 : : /*
379 : : * Process a subgroup of invalidation messages.
380 : : *
381 : : * This is a macro that executes the given code fragment for each message in
382 : : * a message subgroup. The fragment should refer to the message as *msg.
383 : : */
384 : : #define ProcessMessageSubGroup(group, subgroup, codeFragment) \
385 : : do { \
386 : : int _msgindex = (group)->firstmsg[subgroup]; \
387 : : int _endmsg = (group)->nextmsg[subgroup]; \
388 : : for (; _msgindex < _endmsg; _msgindex++) \
389 : : { \
390 : : SharedInvalidationMessage *msg = \
391 : : &InvalMessageArrays[subgroup].msgs[_msgindex]; \
392 : : codeFragment; \
393 : : } \
394 : : } while (0)
395 : :
396 : : /*
397 : : * Process a subgroup of invalidation messages as an array.
398 : : *
399 : : * As above, but the code fragment can handle an array of messages.
400 : : * The fragment should refer to the messages as msgs[], with n entries.
401 : : */
402 : : #define ProcessMessageSubGroupMulti(group, subgroup, codeFragment) \
403 : : do { \
404 : : int n = NumMessagesInSubGroup(group, subgroup); \
405 : : if (n > 0) { \
406 : : SharedInvalidationMessage *msgs = \
407 : : &InvalMessageArrays[subgroup].msgs[(group)->firstmsg[subgroup]]; \
408 : : codeFragment; \
409 : : } \
410 : : } while (0)
411 : :
412 : :
413 : : /* ----------------------------------------------------------------
414 : : * Invalidation group support functions
415 : : *
416 : : * These routines understand about the division of a logical invalidation
417 : : * group into separate physical arrays for catcache and relcache entries.
418 : : * ----------------------------------------------------------------
419 : : */
420 : :
421 : : /*
422 : : * Add a catcache inval entry
423 : : */
424 : : static void
1482 425 : 2759958 : AddCatcacheInvalidationMessage(InvalidationMsgsGroup *group,
426 : : int id, uint32 hashValue, Oid dbId)
427 : : {
428 : : SharedInvalidationMessage msg;
429 : :
5503 rhaas@postgresql.org 430 [ - + ]: 2759958 : Assert(id < CHAR_MAX);
431 : 2759958 : msg.cc.id = (int8) id;
8588 tgl@sss.pgh.pa.us 432 : 2759958 : msg.cc.dbId = dbId;
433 : 2759958 : msg.cc.hashValue = hashValue;
434 : :
435 : : /*
436 : : * Define padding bytes in SharedInvalidationMessage structs to be
437 : : * defined. Otherwise the sinvaladt.c ringbuffer, which is accessed by
438 : : * multiple processes, will cause spurious valgrind warnings about
439 : : * undefined memory being used. That's because valgrind remembers the
440 : : * undefined bytes from the last local process's store, not realizing that
441 : : * another process has written since, filling the previously uninitialized
442 : : * bytes
443 : : */
444 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
445 : :
1482 446 : 2759958 : AddInvalidationMessage(group, CatCacheMsgs, &msg);
9371 inoue@tpf.co.jp 447 : 2759958 : }
448 : :
449 : : /*
450 : : * Add a whole-catalog inval entry
451 : : */
452 : : static void
1482 tgl@sss.pgh.pa.us 453 : 100 : AddCatalogInvalidationMessage(InvalidationMsgsGroup *group,
454 : : Oid dbId, Oid catId)
455 : : {
456 : : SharedInvalidationMessage msg;
457 : :
5690 458 : 100 : msg.cat.id = SHAREDINVALCATALOG_ID;
459 : 100 : msg.cat.dbId = dbId;
460 : 100 : msg.cat.catId = catId;
461 : : /* check AddCatcacheInvalidationMessage() for an explanation */
462 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
463 : :
1482 464 : 100 : AddInvalidationMessage(group, CatCacheMsgs, &msg);
5690 465 : 100 : }
466 : :
467 : : /*
468 : : * Add a relcache inval entry
469 : : */
470 : : static void
1482 471 : 980542 : AddRelcacheInvalidationMessage(InvalidationMsgsGroup *group,
472 : : Oid dbId, Oid relId)
473 : : {
474 : : SharedInvalidationMessage msg;
475 : :
476 : : /*
477 : : * Don't add a duplicate item. We assume dbId need not be checked because
 478 : :                                          :                *  * it will never change.  InvalidOid for relId means all relations, so we
479 : : * don't need to add individual ones when it is present.
480 : : */
481 [ + + + + : 3147068 : ProcessMessageSubGroup(group, RelCacheMsgs,
- + + + ]
482 : : if (msg->rc.id == SHAREDINVALRELCACHE_ID &&
483 : : (msg->rc.relId == relId ||
484 : : msg->rc.relId == InvalidOid))
485 : : return);
486 : :
487 : : /* OK, add the item */
8845 488 : 388352 : msg.rc.id = SHAREDINVALRELCACHE_ID;
489 : 388352 : msg.rc.dbId = dbId;
490 : 388352 : msg.rc.relId = relId;
491 : : /* check AddCatcacheInvalidationMessage() for an explanation */
492 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
493 : :
1482 494 : 388352 : AddInvalidationMessage(group, RelCacheMsgs, &msg);
495 : : }
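The duplicate check above uses a subtle idiom worth noting: because ProcessMessageSubGroup is a macro, its code fragment expands inline in the calling function, so a `return` inside the fragment exits the caller itself. A reduced sketch with a hypothetical macro:

```c
/* iterate an int array, executing the fragment for each element */
#define FOR_EACH(arr, n, codeFragment) \
    do { \
        for (int _i = 0; _i < (n); _i++) \
        { \
            int elem = (arr)[_i]; \
            codeFragment; \
        } \
    } while (0)

static int added;   /* counts values actually appended */

static void
add_unique(int *arr, int *n, int value)
{
    /* `return` in the fragment returns from add_unique, skipping the add */
    FOR_EACH(arr, *n, if (elem == value) return);

    arr[(*n)++] = value;
    added++;
}
```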
496 : :
497 : : /*
498 : : * Add a relsync inval entry
499 : : *
500 : : * We put these into the relcache subgroup for simplicity. This message is the
501 : : * same as AddRelcacheInvalidationMessage() except that it is for
 502 : :  * RelationSyncCache maintained by the decoding plugin pgoutput.
503 : : */
504 : : static void
177 akapila@postgresql.o 505 : 6 : AddRelsyncInvalidationMessage(InvalidationMsgsGroup *group,
506 : : Oid dbId, Oid relId)
507 : : {
508 : : SharedInvalidationMessage msg;
509 : :
510 : : /* Don't add a duplicate item. */
511 [ - - - - : 6 : ProcessMessageSubGroup(group, RelCacheMsgs,
- - - + ]
512 : : if (msg->rc.id == SHAREDINVALRELSYNC_ID &&
513 : : (msg->rc.relId == relId ||
514 : : msg->rc.relId == InvalidOid))
515 : : return);
516 : :
517 : : /* OK, add the item */
518 : 6 : msg.rc.id = SHAREDINVALRELSYNC_ID;
519 : 6 : msg.rc.dbId = dbId;
520 : 6 : msg.rc.relId = relId;
521 : : /* check AddCatcacheInvalidationMessage() for an explanation */
522 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
523 : :
524 : 6 : AddInvalidationMessage(group, RelCacheMsgs, &msg);
525 : : }
526 : :
527 : : /*
528 : : * Add a snapshot inval entry
529 : : *
530 : : * We put these into the relcache subgroup for simplicity.
531 : : */
532 : : static void
1482 tgl@sss.pgh.pa.us 533 : 526727 : AddSnapshotInvalidationMessage(InvalidationMsgsGroup *group,
534 : : Oid dbId, Oid relId)
535 : : {
536 : : SharedInvalidationMessage msg;
537 : :
538 : : /* Don't add a duplicate item */
539 : : /* We assume dbId need not be checked because it will never change */
540 [ + + + + : 761720 : ProcessMessageSubGroup(group, RelCacheMsgs,
+ + ]
541 : : if (msg->sn.id == SHAREDINVALSNAPSHOT_ID &&
542 : : msg->sn.relId == relId)
543 : : return);
544 : :
545 : : /* OK, add the item */
4449 rhaas@postgresql.org 546 : 263176 : msg.sn.id = SHAREDINVALSNAPSHOT_ID;
547 : 263176 : msg.sn.dbId = dbId;
548 : 263176 : msg.sn.relId = relId;
549 : : /* check AddCatcacheInvalidationMessage() for an explanation */
550 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
551 : :
1482 tgl@sss.pgh.pa.us 552 : 263176 : AddInvalidationMessage(group, RelCacheMsgs, &msg);
553 : : }
554 : :
555 : : /*
556 : : * Append one group of invalidation messages to another, resetting
557 : : * the source group to empty.
558 : : */
559 : : static void
560 : 522524 : AppendInvalidationMessages(InvalidationMsgsGroup *dest,
561 : : InvalidationMsgsGroup *src)
562 : : {
563 : 522524 : AppendInvalidationMessageSubGroup(dest, src, CatCacheMsgs);
564 : 522524 : AppendInvalidationMessageSubGroup(dest, src, RelCacheMsgs);
8588 565 : 522524 : }
566 : :
567 : : /*
568 : : * Execute the given function for all the messages in an invalidation group.
569 : : * The group is not altered.
570 : : *
571 : : * catcache entries are processed first, for reasons mentioned above.
572 : : */
573 : : static void
1482 574 : 417165 : ProcessInvalidationMessages(InvalidationMsgsGroup *group,
575 : : void (*func) (SharedInvalidationMessage *msg))
576 : : {
577 [ + + ]: 3036540 : ProcessMessageSubGroup(group, CatCacheMsgs, func(msg));
578 [ + + ]: 985389 : ProcessMessageSubGroup(group, RelCacheMsgs, func(msg));
8845 579 : 417162 : }
580 : :
581 : : /*
582 : : * As above, but the function is able to process an array of messages
583 : : * rather than just one at a time.
584 : : */
585 : : static void
1482 586 : 156141 : ProcessInvalidationMessagesMulti(InvalidationMsgsGroup *group,
587 : : void (*func) (const SharedInvalidationMessage *msgs, int n))
588 : : {
589 [ + + ]: 156141 : ProcessMessageSubGroupMulti(group, CatCacheMsgs, func(msgs, n));
590 [ + + ]: 156141 : ProcessMessageSubGroupMulti(group, RelCacheMsgs, func(msgs, n));
6288 591 : 156141 : }
592 : :
593 : : /* ----------------------------------------------------------------
594 : : * private support functions
595 : : * ----------------------------------------------------------------
596 : : */
597 : :
598 : : /*
599 : : * RegisterCatcacheInvalidation
600 : : *
601 : : * Register an invalidation event for a catcache tuple entry.
602 : : */
603 : : static void
8845 604 : 2759958 : RegisterCatcacheInvalidation(int cacheId,
605 : : uint32 hashValue,
606 : : Oid dbId,
607 : : void *context)
608 : : {
316 noah@leadboat.com 609 : 2759958 : InvalidationInfo *info = (InvalidationInfo *) context;
610 : :
611 : 2759958 : AddCatcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs,
612 : : cacheId, hashValue, dbId);
10651 scrappy@hub.org 613 : 2759958 : }
614 : :
615 : : /*
616 : : * RegisterCatalogInvalidation
617 : : *
618 : : * Register an invalidation event for all catcache entries from a catalog.
619 : : */
620 : : static void
316 noah@leadboat.com 621 : 100 : RegisterCatalogInvalidation(InvalidationInfo *info, Oid dbId, Oid catId)
622 : : {
623 : 100 : AddCatalogInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, catId);
5690 tgl@sss.pgh.pa.us 624 : 100 : }
625 : :
626 : : /*
627 : : * RegisterRelcacheInvalidation
628 : : *
629 : : * As above, but register a relcache invalidation event.
630 : : */
631 : : static void
316 noah@leadboat.com 632 : 980542 : RegisterRelcacheInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
633 : : {
634 : 980542 : AddRelcacheInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
635 : :
636 : : /*
637 : : * Most of the time, relcache invalidation is associated with system
638 : : * catalog updates, but there are a few cases where it isn't. Quick hack
639 : : * to ensure that the next CommandCounterIncrement() will think that we
640 : : * need to do CommandEndInvalidationMessages().
641 : : */
6490 tgl@sss.pgh.pa.us 642 : 980542 : (void) GetCurrentCommandId(true);
643 : :
644 : : /*
645 : : * If the relation being invalidated is one of those cached in a relcache
 646 : :                                          :                *  * init file, mark that we need to zap that file at commit.  For simplicity,
647 : : * invalidations for a specific database always invalidate the shared file
648 : : * as well. Also zap when we are invalidating whole relcache.
649 : : */
2643 andres@anarazel.de 650 [ + + + + ]: 980542 : if (relId == InvalidOid || RelationIdIsInInitFile(relId))
316 noah@leadboat.com 651 : 55401 : info->RelcacheInitFileInval = true;
9371 inoue@tpf.co.jp 652 : 980542 : }
653 : :
654 : : /*
655 : : * RegisterRelsyncInvalidation
656 : : *
657 : : * As above, but register a relsynccache invalidation event.
658 : : */
659 : : static void
177 akapila@postgresql.o 660 : 6 : RegisterRelsyncInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
661 : : {
662 : 6 : AddRelsyncInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
663 : 6 : }
664 : :
665 : : /*
666 : : * RegisterSnapshotInvalidation
667 : : *
668 : : * Register an invalidation event for MVCC scans against a given catalog.
669 : : * Only needed for catalogs that don't have catcaches.
670 : : */
671 : : static void
316 noah@leadboat.com 672 : 526727 : RegisterSnapshotInvalidation(InvalidationInfo *info, Oid dbId, Oid relId)
673 : : {
674 : 526727 : AddSnapshotInvalidationMessage(&info->CurrentCmdInvalidMsgs, dbId, relId);
4449 rhaas@postgresql.org 675 : 526727 : }
676 : :
677 : : /*
678 : : * PrepareInvalidationState
679 : : * Initialize inval data for the current (sub)transaction.
680 : : */
681 : : static InvalidationInfo *
669 michael@paquier.xyz 682 : 2084392 : PrepareInvalidationState(void)
683 : : {
684 : : TransInvalidationInfo *myInfo;
685 : :
686 : : /* PrepareToInvalidateCacheTuple() needs relcache */
142 noah@leadboat.com 687 : 2084392 : AssertCouldGetRelation();
688 : : /* Can't queue transactional message while collecting inplace messages. */
316 689 [ - + ]: 2084392 : Assert(inplaceInvalInfo == NULL);
690 : :
669 michael@paquier.xyz 691 [ + + + + ]: 4058630 : if (transInvalInfo != NULL &&
692 : 1974238 : transInvalInfo->my_level == GetCurrentTransactionNestLevel())
316 noah@leadboat.com 693 : 1974176 : return (InvalidationInfo *) transInvalInfo;
694 : :
695 : : myInfo = (TransInvalidationInfo *)
669 michael@paquier.xyz 696 : 110216 : MemoryContextAllocZero(TopTransactionContext,
697 : : sizeof(TransInvalidationInfo));
698 : 110216 : myInfo->parent = transInvalInfo;
699 : 110216 : myInfo->my_level = GetCurrentTransactionNestLevel();
700 : :
701 : : /* Now, do we have a previous stack entry? */
702 [ + + ]: 110216 : if (transInvalInfo != NULL)
703 : : {
704 : : /* Yes; this one should be for a deeper nesting level. */
705 [ - + ]: 62 : Assert(myInfo->my_level > transInvalInfo->my_level);
706 : :
707 : : /*
708 : : * The parent (sub)transaction must not have any current (i.e.,
709 : : * not-yet-locally-processed) messages. If it did, we'd have a
710 : : * semantic problem: the new subtransaction presumably ought not be
711 : : * able to see those events yet, but since the CommandCounter is
712 : : * linear, that can't work once the subtransaction advances the
713 : : * counter. This is a convenient place to check for that, as well as
714 : : * being important to keep management of the message arrays simple.
715 : : */
316 noah@leadboat.com 716 [ - + ]: 62 : if (NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs) != 0)
669 michael@paquier.xyz 717 [ # # ]:UBC 0 : elog(ERROR, "cannot start a subtransaction when there are unprocessed inval messages");
718 : :
719 : : /*
720 : : * MemoryContextAllocZero set firstmsg = nextmsg = 0 in each group,
721 : : * which is fine for the first (sub)transaction, but otherwise we need
722 : : * to update them to follow whatever is already in the arrays.
723 : : */
669 michael@paquier.xyz 724 :CBC 62 : SetGroupToFollow(&myInfo->PriorCmdInvalidMsgs,
725 : : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
316 noah@leadboat.com 726 : 62 : SetGroupToFollow(&myInfo->ii.CurrentCmdInvalidMsgs,
727 : : &myInfo->PriorCmdInvalidMsgs);
728 : : }
729 : : else
730 : : {
731 : : /*
732 : : * Here, we need only clear any array pointers left over from a prior
733 : : * transaction.
734 : : */
669 michael@paquier.xyz 735 : 110154 : InvalMessageArrays[CatCacheMsgs].msgs = NULL;
736 : 110154 : InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
737 : 110154 : InvalMessageArrays[RelCacheMsgs].msgs = NULL;
738 : 110154 : InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
739 : : }
740 : :
741 : 110216 : transInvalInfo = myInfo;
316 noah@leadboat.com 742 : 110216 : return (InvalidationInfo *) myInfo;
743 : : }
744 : :
745 : : /*
746 : : * PrepareInplaceInvalidationState
747 : : * Initialize inval data for an inplace update.
748 : : *
749 : : * See previous function for more background.
750 : : */
751 : : static InvalidationInfo *
752 : 73346 : PrepareInplaceInvalidationState(void)
753 : : {
754 : : InvalidationInfo *myInfo;
755 : :
142 756 : 73346 : AssertCouldGetRelation();
757 : : /* limit of one inplace update under assembly */
316 758 [ - + ]: 73346 : Assert(inplaceInvalInfo == NULL);
759 : :
760 : : /* gone after WAL insertion CritSection ends, so use current context */
761 : 73346 : myInfo = (InvalidationInfo *) palloc0(sizeof(InvalidationInfo));
762 : :
763 : : /* Stash our messages past end of the transactional messages, if any. */
764 [ + + ]: 73346 : if (transInvalInfo != NULL)
765 : 53805 : SetGroupToFollow(&myInfo->CurrentCmdInvalidMsgs,
766 : : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
767 : : else
768 : : {
769 : 19541 : InvalMessageArrays[CatCacheMsgs].msgs = NULL;
770 : 19541 : InvalMessageArrays[CatCacheMsgs].maxmsgs = 0;
771 : 19541 : InvalMessageArrays[RelCacheMsgs].msgs = NULL;
772 : 19541 : InvalMessageArrays[RelCacheMsgs].maxmsgs = 0;
773 : : }
774 : :
775 : 73346 : inplaceInvalInfo = myInfo;
776 : 73346 : return myInfo;
777 : : }
778 : :
779 : : /* ----------------------------------------------------------------
780 : : * public functions
781 : : * ----------------------------------------------------------------
782 : : */
783 : :
784 : : void
669 michael@paquier.xyz 785 : 2045 : InvalidateSystemCachesExtended(bool debug_discard)
786 : : {
787 : : int i;
788 : :
789 : 2045 : InvalidateCatalogSnapshot();
235 heikki.linnakangas@i 790 : 2045 : ResetCatalogCachesExt(debug_discard);
669 michael@paquier.xyz 791 : 2045 : RelationCacheInvalidate(debug_discard); /* gets smgr and relmap too */
792 : :
793 [ + + ]: 35099 : for (i = 0; i < syscache_callback_count; i++)
794 : : {
795 : 33054 : struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
796 : :
797 : 33054 : ccitem->function(ccitem->arg, ccitem->id, 0);
798 : : }
799 : :
800 [ + + ]: 4675 : for (i = 0; i < relcache_callback_count; i++)
801 : : {
802 : 2630 : struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
803 : :
804 : 2630 : ccitem->function(ccitem->arg, InvalidOid);
805 : : }
806 : :
177 akapila@postgresql.o 807 [ + + ]: 2065 : for (i = 0; i < relsync_callback_count; i++)
808 : : {
809 : 20 : struct RELSYNCCALLBACK *ccitem = relsync_callback_list + i;
810 : :
811 : 20 : ccitem->function(ccitem->arg, InvalidOid);
812 : : }
669 michael@paquier.xyz 813 : 2045 : }
814 : :
815 : : /*
816 : : * LocalExecuteInvalidationMessage
817 : : *
818 : : * Process a single invalidation message (which could be of any type).
819 : : * Only the local caches are flushed; this does not transmit the message
820 : : * to other backends.
821 : : */
822 : : void
8845 tgl@sss.pgh.pa.us 823 : 18955312 : LocalExecuteInvalidationMessage(SharedInvalidationMessage *msg)
824 : : {
825 [ + + ]: 18955312 : if (msg->id >= 0)
826 : : {
5690 827 [ + + + + ]: 15282796 : if (msg->cc.dbId == MyDatabaseId || msg->cc.dbId == InvalidOid)
828 : : {
4449 rhaas@postgresql.org 829 : 11414429 : InvalidateCatalogSnapshot();
830 : :
3039 tgl@sss.pgh.pa.us 831 : 11414429 : SysCacheInvalidate(msg->cc.id, msg->cc.hashValue);
832 : :
5135 833 : 11414429 : CallSyscacheCallbacks(msg->cc.id, msg->cc.hashValue);
834 : : }
835 : : }
5690 836 [ + + ]: 3672516 : else if (msg->id == SHAREDINVALCATALOG_ID)
837 : : {
838 [ + + + + ]: 388 : if (msg->cat.dbId == MyDatabaseId || msg->cat.dbId == InvalidOid)
839 : : {
4449 rhaas@postgresql.org 840 : 334 : InvalidateCatalogSnapshot();
841 : :
5690 tgl@sss.pgh.pa.us 842 : 334 : CatalogCacheFlushCatalog(msg->cat.catId);
843 : :
844 : : /* CatalogCacheFlushCatalog calls CallSyscacheCallbacks as needed */
845 : : }
846 : : }
8845 847 [ + + ]: 3672128 : else if (msg->id == SHAREDINVALRELCACHE_ID)
848 : : {
7879 849 [ + + + + ]: 1965212 : if (msg->rc.dbId == MyDatabaseId || msg->rc.dbId == InvalidOid)
850 : : {
851 : : int i;
852 : :
3152 peter_e@gmx.net 853 [ + + ]: 1467891 : if (msg->rc.relId == InvalidOid)
1414 noah@leadboat.com 854 : 253 : RelationCacheInvalidate(false);
855 : : else
3152 peter_e@gmx.net 856 : 1467638 : RelationCacheInvalidateEntry(msg->rc.relId);
857 : :
6206 tgl@sss.pgh.pa.us 858 [ + + ]: 4040794 : for (i = 0; i < relcache_callback_count; i++)
859 : : {
860 : 2572906 : struct RELCACHECALLBACK *ccitem = relcache_callback_list + i;
861 : :
2921 peter_e@gmx.net 862 : 2572906 : ccitem->function(ccitem->arg, msg->rc.relId);
863 : : }
864 : : }
865 : : }
7544 tgl@sss.pgh.pa.us 866 [ + + ]: 1706916 : else if (msg->id == SHAREDINVALSMGR_ID)
867 : : {
868 : : /*
869 : : * We could have smgr entries for relations of other databases, so no
870 : : * short-circuit test is possible here.
871 : : */
872 : : RelFileLocatorBackend rlocator;
873 : :
1074 rhaas@postgresql.org 874 : 233483 : rlocator.locator = msg->sm.rlocator;
1158 875 : 233483 : rlocator.backend = (msg->sm.backend_hi << 16) | (int) msg->sm.backend_lo;
584 heikki.linnakangas@i 876 : 233483 : smgrreleaserellocator(rlocator);
877 : : }
5690 tgl@sss.pgh.pa.us 878 [ + + ]: 1473433 : else if (msg->id == SHAREDINVALRELMAP_ID)
879 : : {
880 : : /* We only care about our own database and shared catalogs */
881 [ + + ]: 238 : if (msg->rm.dbId == InvalidOid)
882 : 137 : RelationMapInvalidate(true);
883 [ + + ]: 101 : else if (msg->rm.dbId == MyDatabaseId)
884 : 68 : RelationMapInvalidate(false);
885 : : }
4449 rhaas@postgresql.org 886 [ + + ]: 1473195 : else if (msg->id == SHAREDINVALSNAPSHOT_ID)
887 : : {
888 : : /* We only care about our own database and shared catalogs */
1713 michael@paquier.xyz 889 [ + + ]: 1473164 : if (msg->sn.dbId == InvalidOid)
4449 rhaas@postgresql.org 890 : 49927 : InvalidateCatalogSnapshot();
1713 michael@paquier.xyz 891 [ + + ]: 1423237 : else if (msg->sn.dbId == MyDatabaseId)
4449 rhaas@postgresql.org 892 : 1080052 : InvalidateCatalogSnapshot();
893 : : }
177 akapila@postgresql.o 894 [ + - ]: 31 : else if (msg->id == SHAREDINVALRELSYNC_ID)
895 : : {
896 : : /* We only care about our own database */
897 [ + - ]: 31 : if (msg->rs.dbId == MyDatabaseId)
898 : 31 : CallRelSyncCallbacks(msg->rs.relid);
899 : : }
900 : : else
5193 peter_e@gmx.net 901 [ # # ]:UBC 0 : elog(FATAL, "unrecognized SI message ID: %d", msg->id);
10651 scrappy@hub.org 902 :CBC 18955309 : }
903 : :
904 : : /*
905 : : * InvalidateSystemCaches
906 : : *
907 : : * This blows away all tuples in the system catalog caches and
908 : : * all the cached relation descriptors and smgr cache entries.
909 : : * Relation descriptors that have positive refcounts are then rebuilt.
910 : : *
911 : : * We call this when we see a shared-inval-queue overflow signal,
912 : : * since that tells us we've lost some shared-inval messages and hence
913 : : * don't know what needs to be invalidated.
914 : : */
915 : : void
8846 tgl@sss.pgh.pa.us 916 : 2045 : InvalidateSystemCaches(void)
917 : : {
1414 noah@leadboat.com 918 : 2045 : InvalidateSystemCachesExtended(false);
919 : 2045 : }
920 : :
921 : : /*
922 : : * AcceptInvalidationMessages
923 : : * Read and process invalidation messages from the shared invalidation
924 : : * message queue.
925 : : *
926 : : * Note:
927 : : * This should be called as the first step in processing a transaction.
928 : : */
929 : : void
8845 tgl@sss.pgh.pa.us 930 : 16979998 : AcceptInvalidationMessages(void)
931 : : {
932 : : #ifdef USE_ASSERT_CHECKING
933 : : /* message handlers shall access catalogs only during transactions */
142 noah@leadboat.com 934 [ + + ]: 16979998 : if (IsTransactionState())
935 : 16657684 : AssertCouldGetRelation();
936 : : #endif
937 : :
8845 tgl@sss.pgh.pa.us 938 : 16979998 : ReceiveSharedInvalidMessages(LocalExecuteInvalidationMessage,
939 : : InvalidateSystemCaches);
940 : :
941 : : /*----------
942 : : * Test code to force cache flushes anytime a flush could happen.
943 : : *
944 : : * This helps detect intermittent faults caused by code that reads a cache
945 : : * entry and then performs an action that could invalidate the entry, but
946 : : * rarely actually does so. This can spot issues that would otherwise
947 : : * only arise with badly timed concurrent DDL, for example.
948 : : *
949 : : * The default debug_discard_caches = 0 does no forced cache flushes.
950 : : *
951 : : * If used with CLOBBER_FREED_MEMORY,
952 : : * debug_discard_caches = 1 (formerly known as CLOBBER_CACHE_ALWAYS)
953 : : * provides a fairly thorough test that the system contains no cache-flush
954 : : * hazards. However, it also makes the system unbelievably slow --- the
955 : : * regression tests take about 100 times longer than normal.
956 : : *
957 : : * If you're a glutton for punishment, try
958 : : * debug_discard_caches = 3 (formerly known as CLOBBER_CACHE_RECURSIVELY).
959 : : * This slows things by at least a factor of 10000, so I wouldn't suggest
960 : : * trying to run the entire regression tests that way. It's useful to try
961 : : * a few simple tests, to make sure that cache reload isn't subject to
962 : : * internal cache-flush hazards, but after you've done a few thousand
963 : : * recursive reloads it's unlikely you'll learn more.
964 : : *----------
965 : : */
966 : : #ifdef DISCARD_CACHES_ENABLED
967 : : {
968 : : static int recursion_depth = 0;
969 : :
1516 970 [ - + ]: 16979998 : if (recursion_depth < debug_discard_caches)
971 : : {
2556 tgl@sss.pgh.pa.us 972 :UBC 0 : recursion_depth++;
1414 noah@leadboat.com 973 : 0 : InvalidateSystemCachesExtended(true);
2556 tgl@sss.pgh.pa.us 974 : 0 : recursion_depth--;
975 : : }
976 : : }
977 : : #endif
10651 scrappy@hub.org 978 :CBC 16979998 : }
979 : :
980 : : /*
981 : : * PostPrepare_Inval
982 : : * Clean up after successful PREPARE.
983 : : *
984 : : * Here, we want to act as though the transaction aborted, so that we will
985 : : * undo any syscache changes it made, thereby bringing us into sync with the
986 : : * outside world, which doesn't believe the transaction committed yet.
987 : : *
988 : : * If the prepared transaction is later aborted, there is nothing more to
989 : : * do; if it commits, we will receive the consequent inval messages just
990 : : * like everyone else.
991 : : */
992 : : void
7386 tgl@sss.pgh.pa.us 993 : 290 : PostPrepare_Inval(void)
994 : : {
995 : 290 : AtEOXact_Inval(false);
996 : 290 : }
997 : :
998 : : /*
999 : : * xactGetCommittedInvalidationMessages() is called by
1000 : : * RecordTransactionCommit() to collect invalidation messages to add to the
1001 : : * commit record. This applies only to commit message types, never to
1002 : : * abort records. Must always run before AtEOXact_Inval(), since that
1003 : : * removes the data we need to see.
1004 : : *
1005 : : * Remember that this runs before we have officially committed, so we
1006 : : * must not do anything here to change what might occur *if* we should
1007 : : * fail between here and the actual commit.
1008 : : *
1009 : : * see also xact_redo_commit() and xact_desc_commit()
1010 : : */
1011 : : int
5740 simon@2ndQuadrant.co 1012 : 197061 : xactGetCommittedInvalidationMessages(SharedInvalidationMessage **msgs,
1013 : : bool *RelcacheInitFileInval)
1014 : : {
1015 : : SharedInvalidationMessage *msgarray;
1016 : : int nummsgs;
1017 : : int nmsgs;
1018 : :
1019 : : /* Quick exit if we haven't done anything with invalidation messages. */
3965 rhaas@postgresql.org 1020 [ + + ]: 197061 : if (transInvalInfo == NULL)
1021 : : {
1022 : 113809 : *RelcacheInitFileInval = false;
1023 : 113809 : *msgs = NULL;
1024 : 113809 : return 0;
1025 : : }
1026 : :
1027 : : /* Must be at top of stack */
1028 [ + - - + ]: 83252 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1029 : :
1030 : : /*
1031 : : * Relcache init file invalidation requires processing both before and
1032 : : * after we send the SI messages. However, we need not do anything unless
1033 : : * we committed.
1034 : : */
316 noah@leadboat.com 1035 : 83252 : *RelcacheInitFileInval = transInvalInfo->ii.RelcacheInitFileInval;
1036 : :
1037 : : /*
1038 : : * Collect all the pending messages into a single contiguous array of
1039 : : * invalidation messages, to simplify what needs to happen while building
1040 : : * the commit WAL message. Maintain the order that they would be
1041 : : * processed in by AtEOXact_Inval(), to ensure emulated behaviour in redo
1042 : : * is as similar as possible to original. We want the same bugs, if any,
1043 : : * not new ones.
1044 : : */
1482 tgl@sss.pgh.pa.us 1045 : 83252 : nummsgs = NumMessagesInGroup(&transInvalInfo->PriorCmdInvalidMsgs) +
316 noah@leadboat.com 1046 : 83252 : NumMessagesInGroup(&transInvalInfo->ii.CurrentCmdInvalidMsgs);
1047 : :
1482 tgl@sss.pgh.pa.us 1048 : 83252 : *msgs = msgarray = (SharedInvalidationMessage *)
1049 : 83252 : MemoryContextAlloc(CurTransactionContext,
1050 : : nummsgs * sizeof(SharedInvalidationMessage));
1051 : :
1052 : 83252 : nmsgs = 0;
1053 [ + + ]: 83252 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1054 : : CatCacheMsgs,
1055 : : (memcpy(msgarray + nmsgs,
1056 : : msgs,
1057 : : n * sizeof(SharedInvalidationMessage)),
1058 : : nmsgs += n));
316 noah@leadboat.com 1059 [ + + ]: 83252 : ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1060 : : CatCacheMsgs,
1061 : : (memcpy(msgarray + nmsgs,
1062 : : msgs,
1063 : : n * sizeof(SharedInvalidationMessage)),
1064 : : nmsgs += n));
1482 tgl@sss.pgh.pa.us 1065 [ + + ]: 83252 : ProcessMessageSubGroupMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1066 : : RelCacheMsgs,
1067 : : (memcpy(msgarray + nmsgs,
1068 : : msgs,
1069 : : n * sizeof(SharedInvalidationMessage)),
1070 : : nmsgs += n));
316 noah@leadboat.com 1071 [ + + ]: 83252 : ProcessMessageSubGroupMulti(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1072 : : RelCacheMsgs,
1073 : : (memcpy(msgarray + nmsgs,
1074 : : msgs,
1075 : : n * sizeof(SharedInvalidationMessage)),
1076 : : nmsgs += n));
1077 [ - + ]: 83252 : Assert(nmsgs == nummsgs);
1078 : :
1079 : 83252 : return nmsgs;
1080 : : }
1081 : :
1082 : : /*
1083 : : * inplaceGetInvalidationMessages() is called by the inplace update to collect
1084 : : * invalidation messages to add to its WAL record. Like the previous
1085 : : * function, we might still fail.
1086 : : */
1087 : : int
1088 : 49396 : inplaceGetInvalidationMessages(SharedInvalidationMessage **msgs,
1089 : : bool *RelcacheInitFileInval)
1090 : : {
1091 : : SharedInvalidationMessage *msgarray;
1092 : : int nummsgs;
1093 : : int nmsgs;
1094 : :
1095 : : /* Quick exit if we haven't done anything with invalidation messages. */
1096 [ + + ]: 49396 : if (inplaceInvalInfo == NULL)
1097 : : {
1098 : 14650 : *RelcacheInitFileInval = false;
1099 : 14650 : *msgs = NULL;
1100 : 14650 : return 0;
1101 : : }
1102 : :
1103 : 34746 : *RelcacheInitFileInval = inplaceInvalInfo->RelcacheInitFileInval;
1104 : 34746 : nummsgs = NumMessagesInGroup(&inplaceInvalInfo->CurrentCmdInvalidMsgs);
1105 : 34746 : *msgs = msgarray = (SharedInvalidationMessage *)
1106 : 34746 : palloc(nummsgs * sizeof(SharedInvalidationMessage));
1107 : :
1108 : 34746 : nmsgs = 0;
1109 [ + - ]: 34746 : ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1110 : : CatCacheMsgs,
1111 : : (memcpy(msgarray + nmsgs,
1112 : : msgs,
1113 : : n * sizeof(SharedInvalidationMessage)),
1114 : : nmsgs += n));
1115 [ + + ]: 34746 : ProcessMessageSubGroupMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1116 : : RelCacheMsgs,
1117 : : (memcpy(msgarray + nmsgs,
1118 : : msgs,
1119 : : n * sizeof(SharedInvalidationMessage)),
1120 : : nmsgs += n));
1482 tgl@sss.pgh.pa.us 1121 [ - + ]: 34746 : Assert(nmsgs == nummsgs);
1122 : :
1123 : 34746 : return nmsgs;
1124 : : }
1125 : :
1126 : : /*
1127 : : * ProcessCommittedInvalidationMessages is executed by xact_redo_commit() or
1128 : : * standby_redo() to process invalidation messages. Currently that happens
1129 : : * only at end-of-xact.
1130 : : *
1131 : : * Relcache init file invalidation requires processing both
1132 : : * before and after we send the SI messages. See AtEOXact_Inval()
1133 : : */
1134 : : void
5719 simon@2ndQuadrant.co 1135 : 27644 : ProcessCommittedInvalidationMessages(SharedInvalidationMessage *msgs,
1136 : : int nmsgs, bool RelcacheInitFileInval,
1137 : : Oid dbid, Oid tsid)
1138 : : {
5684 1139 [ + + ]: 27644 : if (nmsgs <= 0)
1140 : 5065 : return;
1141 : :
635 michael@paquier.xyz 1142 [ - + - - ]: 22579 : elog(DEBUG4, "replaying commit with %d messages%s", nmsgs,
1143 : : (RelcacheInitFileInval ? " and relcache file invalidation" : ""));
1144 : :
5719 simon@2ndQuadrant.co 1145 [ + + ]: 22579 : if (RelcacheInitFileInval)
1146 : : {
635 michael@paquier.xyz 1147 [ - + ]: 469 : elog(DEBUG4, "removing relcache init files for database %u", dbid);
1148 : :
1149 : : /*
1150 : : * RelationCacheInitFilePreInvalidate, when the invalidation message
1151 : : * is for a specific database, requires DatabasePath to be set, but we
1152 : : * should not use SetDatabasePath during recovery, since it is
1153 : : * intended to be used only once by normal backends. Hence, a quick
1154 : : * hack: set DatabasePath directly then unset after use.
1155 : : */
2643 andres@anarazel.de 1156 [ + - ]: 469 : if (OidIsValid(dbid))
1157 : 469 : DatabasePath = GetDatabasePath(dbid, tsid);
1158 : :
5135 tgl@sss.pgh.pa.us 1159 : 469 : RelationCacheInitFilePreInvalidate();
1160 : :
2643 andres@anarazel.de 1161 [ + - ]: 469 : if (OidIsValid(dbid))
1162 : : {
1163 : 469 : pfree(DatabasePath);
1164 : 469 : DatabasePath = NULL;
1165 : : }
1166 : : }
1167 : :
5719 simon@2ndQuadrant.co 1168 : 22579 : SendSharedInvalidMessages(msgs, nmsgs);
1169 : :
1170 [ + + ]: 22579 : if (RelcacheInitFileInval)
5135 tgl@sss.pgh.pa.us 1171 : 469 : RelationCacheInitFilePostInvalidate();
1172 : : }
1173 : :
1174 : : /*
1175 : : * AtEOXact_Inval
1176 : : * Process queued-up invalidation messages at end of main transaction.
1177 : : *
1178 : : * If isCommit, we must send out the messages in our PriorCmdInvalidMsgs list
1179 : : * to the shared invalidation message queue. Note that these will be read
1180 : : * not only by other backends, but also by our own backend at the next
1181 : : * transaction start (via AcceptInvalidationMessages). This means that
1182 : : * we can skip immediate local processing of anything that's still in
1183 : : * CurrentCmdInvalidMsgs, and just send that list out too.
1184 : : *
1185 : : * If not isCommit, we are aborting, and must locally process the messages
1186 : : * in PriorCmdInvalidMsgs. No messages need be sent to other backends,
1187 : : * since they'll not have seen our changed tuples anyway. We can forget
1188 : : * about CurrentCmdInvalidMsgs too, since those changes haven't touched
1189 : : * the caches yet.
1190 : : *
1191 : : * In any case, reset our state to empty. We need not physically
1192 : : * free memory here, since TopTransactionContext is about to be emptied
1193 : : * anyway.
1194 : : *
1195 : : * Note:
1196 : : * This should be called as the last step in processing a transaction.
1197 : : */
1198 : : void
7737 1199 : 319034 : AtEOXact_Inval(bool isCommit)
1200 : : {
316 noah@leadboat.com 1201 : 319034 : inplaceInvalInfo = NULL;
1202 : :
1203 : : /* Quick exit if no transactional messages */
3965 rhaas@postgresql.org 1204 [ + + ]: 319034 : if (transInvalInfo == NULL)
1205 : 208912 : return;
1206 : :
1207 : : /* Must be at top of stack */
1208 [ + - - + ]: 110122 : Assert(transInvalInfo->my_level == 1 && transInvalInfo->parent == NULL);
1209 : :
1210 : : INJECTION_POINT("transaction-end-process-inval", NULL);
1211 : :
8845 tgl@sss.pgh.pa.us 1212 [ + + ]: 110122 : if (isCommit)
1213 : : {
1214 : : /*
1215 : : * Relcache init file invalidation requires processing both before and
1216 : : * after we send the SI messages. However, we need not do anything
1217 : : * unless we committed.
1218 : : */
316 noah@leadboat.com 1219 [ + + ]: 107778 : if (transInvalInfo->ii.RelcacheInitFileInval)
5135 tgl@sss.pgh.pa.us 1220 : 10312 : RelationCacheInitFilePreInvalidate();
1221 : :
7737 1222 : 107778 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
316 noah@leadboat.com 1223 : 107778 : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1224 : :
6288 tgl@sss.pgh.pa.us 1225 : 107778 : ProcessInvalidationMessagesMulti(&transInvalInfo->PriorCmdInvalidMsgs,
1226 : : SendSharedInvalidMessages);
1227 : :
316 noah@leadboat.com 1228 [ + + ]: 107778 : if (transInvalInfo->ii.RelcacheInitFileInval)
5135 tgl@sss.pgh.pa.us 1229 : 10312 : RelationCacheInitFilePostInvalidate();
1230 : : }
1231 : : else
1232 : : {
7737 1233 : 2344 : ProcessInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
1234 : : LocalExecuteInvalidationMessage);
1235 : : }
1236 : :
1237 : : /* Need not free anything explicitly */
1238 : 110122 : transInvalInfo = NULL;
1239 : : }
1240 : :
1241 : : /*
1242 : : * PreInplace_Inval
1243 : : * Process queued-up invalidation before inplace update critical section.
1244 : : *
1245 : : * Tasks belong here if they are safe even if the inplace update does not
1246 : : * complete. Currently, this just unlinks a cache file, which can fail. The
1247 : : * sum of this and AtInplace_Inval() mirrors AtEOXact_Inval(isCommit=true).
1248 : : */
1249 : : void
316 noah@leadboat.com 1250 : 63013 : PreInplace_Inval(void)
1251 : : {
1252 [ - + ]: 63013 : Assert(CritSectionCount == 0);
1253 : :
1254 [ + + + + ]: 63013 : if (inplaceInvalInfo && inplaceInvalInfo->RelcacheInitFileInval)
1255 : 9093 : RelationCacheInitFilePreInvalidate();
1256 : 63013 : }
1257 : :
1258 : : /*
1259 : : * AtInplace_Inval
1260 : : * Process queued-up invalidations after inplace update buffer mutation.
1261 : : */
1262 : : void
1263 : 63013 : AtInplace_Inval(void)
1264 : : {
1265 [ - + ]: 63013 : Assert(CritSectionCount > 0);
1266 : :
1267 [ + + ]: 63013 : if (inplaceInvalInfo == NULL)
1268 : 14650 : return;
1269 : :
1270 : 48363 : ProcessInvalidationMessagesMulti(&inplaceInvalInfo->CurrentCmdInvalidMsgs,
1271 : : SendSharedInvalidMessages);
1272 : :
1273 [ + + ]: 48363 : if (inplaceInvalInfo->RelcacheInitFileInval)
1274 : 9093 : RelationCacheInitFilePostInvalidate();
1275 : :
1276 : 48363 : inplaceInvalInfo = NULL;
1277 : : }
1278 : :
1279 : : /*
1280 : : * ForgetInplace_Inval
1281 : : * Alternative to PreInplace_Inval()+AtInplace_Inval(): discard queued-up
1282 : : * invalidations. This lets inplace update enumerate invalidations
1283 : : * optimistically, before locking the buffer.
1284 : : */
1285 : : void
308 1286 : 27983 : ForgetInplace_Inval(void)
1287 : : {
1288 : 27983 : inplaceInvalInfo = NULL;
1289 : 27983 : }
1290 : :
1291 : : /*
1292 : : * AtEOSubXact_Inval
1293 : : * Process queued-up invalidation messages at end of subtransaction.
1294 : : *
1295 : : * If isCommit, process CurrentCmdInvalidMsgs if any (there probably aren't),
1296 : : * and then attach both CurrentCmdInvalidMsgs and PriorCmdInvalidMsgs to the
1297 : : * parent's PriorCmdInvalidMsgs list.
1298 : : *
1299 : : * If not isCommit, we are aborting, and must locally process the messages
1300 : : * in PriorCmdInvalidMsgs. No messages need be sent to other backends.
1301 : : * We can forget about CurrentCmdInvalidMsgs too, since those changes haven't
1302 : : * touched the caches yet.
1303 : : *
1304 : : * In any case, pop the transaction stack. We need not physically free memory
1305 : : * here, since CurTransactionContext is about to be emptied anyway
1306 : : * (if aborting). Beware of the possibility of aborting the same nesting
1307 : : * level twice, though.
1308 : : */
1309 : : void
7710 tgl@sss.pgh.pa.us 1310 : 9093 : AtEOSubXact_Inval(bool isCommit)
1311 : : {
1312 : : int my_level;
1313 : : TransInvalidationInfo *myInfo;
1314 : :
1315 : : /*
1316 : : * Successful inplace update must clear this, but we clear it on abort.
1317 : : * Inplace updates allocate this in CurrentMemoryContext, which has
1318 : : * lifespan <= subtransaction lifespan. Hence, don't free it explicitly.
1319 : : */
316 noah@leadboat.com 1320 [ + + ]: 9093 : if (isCommit)
1321 [ - + ]: 4426 : Assert(inplaceInvalInfo == NULL);
1322 : : else
1323 : 4667 : inplaceInvalInfo = NULL;
1324 : :
1325 : : /* Quick exit if no transactional messages. */
1326 : 9093 : myInfo = transInvalInfo;
3965 rhaas@postgresql.org 1327 [ + + ]: 9093 : if (myInfo == NULL)
1328 : 8289 : return;
1329 : :
1330 : : /* Also bail out quickly if messages are not for this level. */
1331 : 804 : my_level = GetCurrentTransactionNestLevel();
1332 [ + + ]: 804 : if (myInfo->my_level != my_level)
1333 : : {
1334 [ - + ]: 675 : Assert(myInfo->my_level < my_level);
1335 : 675 : return;
1336 : : }
1337 : :
1338 [ + + ]: 129 : if (isCommit)
1339 : : {
1340 : : /* If CurrentCmdInvalidMsgs still has anything, fix it */
7737 tgl@sss.pgh.pa.us 1341 : 46 : CommandEndInvalidationMessages();
1342 : :
1343 : : /*
1344 : : * We create invalidation stack entries lazily, so the parent might
1345 : : * not have one. Instead of creating one, moving all the data over,
1346 : : * and then freeing our own, we can just adjust the level of our own
1347 : : * entry.
1348 : : */
3965 rhaas@postgresql.org 1349 [ + + - + ]: 46 : if (myInfo->parent == NULL || myInfo->parent->my_level < my_level - 1)
1350 : : {
1351 : 35 : myInfo->my_level--;
1352 : 35 : return;
1353 : : }
1354 : :
1355 : : /*
1356 : : * Pass up my inval messages to parent. Notice that we stick them in
1357 : : * PriorCmdInvalidMsgs, not CurrentCmdInvalidMsgs, since they've
1358 : : * already been locally processed. (This would trigger the Assert in
1359 : : * AppendInvalidationMessageSubGroup if the parent's
1360 : : * CurrentCmdInvalidMsgs isn't empty; but we already checked that in
1361 : : * PrepareInvalidationState.)
1362 : : */
7737 tgl@sss.pgh.pa.us 1363 : 11 : AppendInvalidationMessages(&myInfo->parent->PriorCmdInvalidMsgs,
1364 : : &myInfo->PriorCmdInvalidMsgs);
1365 : :
1366 : : /* Must readjust parent's CurrentCmdInvalidMsgs indexes now */
316 noah@leadboat.com 1367 : 11 : SetGroupToFollow(&myInfo->parent->ii.CurrentCmdInvalidMsgs,
1368 : : &myInfo->parent->PriorCmdInvalidMsgs);
1369 : :
1370 : : /* Pending relcache inval becomes parent's problem too */
1371 [ - + ]: 11 : if (myInfo->ii.RelcacheInitFileInval)
316 noah@leadboat.com 1372 :UBC 0 : myInfo->parent->ii.RelcacheInitFileInval = true;
1373 : :
1374 : : /* Pop the transaction state stack */
7670 tgl@sss.pgh.pa.us 1375 :CBC 11 : transInvalInfo = myInfo->parent;
1376 : :
1377 : : /* Need not free anything else explicitly */
1378 : 11 : pfree(myInfo);
1379 : : }
1380 : : else
1381 : : {
7737 1382 : 83 : ProcessInvalidationMessages(&myInfo->PriorCmdInvalidMsgs,
1383 : : LocalExecuteInvalidationMessage);
1384 : :
1385 : : /* Pop the transaction state stack */
7670 1386 : 83 : transInvalInfo = myInfo->parent;
1387 : :
1388 : : /* Need not free anything else explicitly */
1389 : 83 : pfree(myInfo);
1390 : : }
1391 : : }
1392 : :
1393 : : /*
1394 : : * CommandEndInvalidationMessages
1395 : : * Process queued-up invalidation messages at end of one command
1396 : : * in a transaction.
1397 : : *
1398 : : * Here, we send no messages to the shared queue, since we don't know yet if
1399 : : * we will commit. We do need to locally process the CurrentCmdInvalidMsgs
1400 : : * list, so as to flush our caches of any entries we have outdated in the
1401 : : * current command. We then move the current-cmd list over to become part
1402 : : * of the prior-cmds list.
1403 : : *
1404 : : * Note:
1405 : : * This should be called during CommandCounterIncrement(),
1406 : : * after we have advanced the command ID.
1407 : : */
1408 : : void
7737 1409 : 595490 : CommandEndInvalidationMessages(void)
1410 : : {
1411 : : /*
1412 : : * You might think this shouldn't be called outside any transaction, but
1413 : : * bootstrap does it, and also ABORT issued when not in a transaction. So
1414 : : * just quietly return if no state to work on.
1415 : : */
1416 [ + + ]: 595490 : if (transInvalInfo == NULL)
1417 : 180752 : return;
1418 : :
316 noah@leadboat.com 1419 : 414738 : ProcessInvalidationMessages(&transInvalInfo->ii.CurrentCmdInvalidMsgs,
1420 : : LocalExecuteInvalidationMessage);
1421 : :
1422 : : /* WAL Log per-command invalidation messages for wal_level=logical */
1871 akapila@postgresql.o 1423 [ + + ]: 414735 : if (XLogLogicalInfoActive())
1424 : 4407 : LogLogicalInvalidations();
1425 : :
7737 tgl@sss.pgh.pa.us 1426 : 414735 : AppendInvalidationMessages(&transInvalInfo->PriorCmdInvalidMsgs,
316 noah@leadboat.com 1427 : 414735 : &transInvalInfo->ii.CurrentCmdInvalidMsgs);
1428 : : }
1429 : :
1430 : :
1431 : : /*
1432 : : * CacheInvalidateHeapTupleCommon
1433 : : * Common logic for end-of-command and inplace variants.
1434 : : */
1435 : : static void
1436 : 11137572 : CacheInvalidateHeapTupleCommon(Relation relation,
1437 : : HeapTuple tuple,
1438 : : HeapTuple newtuple,
1439 : : InvalidationInfo *(*prepare_callback) (void))
1440 : : {
1441 : : InvalidationInfo *info;
1442 : : Oid tupleRelId;
1443 : : Oid databaseId;
1444 : : Oid relationId;
1445 : :
1446 : : /* PrepareToInvalidateCacheTuple() needs relcache */
142 1447 : 11137572 : AssertCouldGetRelation();
1448 : :
1449 : : /* Do nothing during bootstrap */
5135 tgl@sss.pgh.pa.us 1450 [ + + ]: 11137572 : if (IsBootstrapProcessingMode())
1451 : 656750 : return;
1452 : :
1453 : : /*
1454 : : * We only need to worry about invalidation for tuples that are in system
1455 : : * catalogs; user-relation tuples are never in catcaches and can't affect
1456 : : * the relcache either.
1457 : : */
4300 rhaas@postgresql.org 1458 [ + + ]: 10480822 : if (!IsCatalogRelation(relation))
5135 tgl@sss.pgh.pa.us 1459 : 8417901 : return;
1460 : :
1461 : : /*
1462 : : * IsCatalogRelation() will return true for TOAST tables of system
1463 : : * catalogs, but we don't care about those, either.
1464 : : */
1465 [ + + ]: 2062921 : if (IsToastRelation(relation))
1466 : 17253 : return;
1467 : :
1468 : : /* Allocate any required resources. */
316 noah@leadboat.com 1469 : 2045668 : info = prepare_callback();
1470 : :
1471 : : /*
1472 : : * First let the catcache do its thing
1473 : : */
4449 rhaas@postgresql.org 1474 : 2045668 : tupleRelId = RelationGetRelid(relation);
1475 [ + + ]: 2045668 : if (RelationInvalidatesSnapshotsOnly(tupleRelId))
1476 : : {
1477 [ + + ]: 526727 : databaseId = IsSharedRelation(tupleRelId) ? InvalidOid : MyDatabaseId;
316 noah@leadboat.com 1478 : 526727 : RegisterSnapshotInvalidation(info, databaseId, tupleRelId);
1479 : : }
1480 : : else
4449 rhaas@postgresql.org 1481 : 1518941 : PrepareToInvalidateCacheTuple(relation, tuple, newtuple,
1482 : : RegisterCatcacheInvalidation,
1483 : : (void *) info);
1484 : :
1485 : : /*
1486 : : * Now, is this tuple one of the primary definers of a relcache entry? See
1487 : : * comments in file header for deeper explanation.
1488 : : *
1489 : : * Note we ignore newtuple here; we assume an update cannot move a tuple
1490 : : * from being part of one relcache entry to being part of another.
1491 : : */
5135 tgl@sss.pgh.pa.us 1492 [ + + ]: 2045668 : if (tupleRelId == RelationRelationId)
1493 : : {
1494 : 273498 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(tuple);
1495 : :
2482 andres@anarazel.de 1496 : 273498 : relationId = classtup->oid;
5135 tgl@sss.pgh.pa.us 1497 [ + + ]: 273498 : if (classtup->relisshared)
1498 : 9482 : databaseId = InvalidOid;
1499 : : else
1500 : 264016 : databaseId = MyDatabaseId;
1501 : : }
1502 [ + + ]: 1772170 : else if (tupleRelId == AttributeRelationId)
1503 : : {
1504 : 558400 : Form_pg_attribute atttup = (Form_pg_attribute) GETSTRUCT(tuple);
1505 : :
1506 : 558400 : relationId = atttup->attrelid;
1507 : :
1508 : : /*
1509 : : * KLUGE ALERT: we always send the relcache event with MyDatabaseId,
1510 : : * even if the rel in question is shared (which we can't easily tell).
1511 : : * This essentially means that only backends in this same database
1512 : : * will react to the relcache flush request. This is in fact
1513 : : * appropriate, since only those backends could see our pg_attribute
1514 : : * change anyway. It looks a bit ugly though. (In practice, shared
1515 : : * relations can't have schema changes after bootstrap, so we should
1516 : : * never come here for a shared rel anyway.)
1517 : : */
1518 : 558400 : databaseId = MyDatabaseId;
1519 : : }
1520 [ + + ]: 1213770 : else if (tupleRelId == IndexRelationId)
1521 : : {
1522 : 32379 : Form_pg_index indextup = (Form_pg_index) GETSTRUCT(tuple);
1523 : :
1524 : : /*
1525 : : * When a pg_index row is updated, we should send out a relcache inval
1526 : : * for the index relation. As above, we don't know the shared status
1527 : : * of the index, but in practice it doesn't matter since indexes of
1528 : : * shared catalogs can't have such updates.
1529 : : */
1530 : 32379 : relationId = indextup->indexrelid;
1531 : 32379 : databaseId = MyDatabaseId;
1532 : : }
2420 alvherre@alvh.no-ip. 1533 [ + + ]: 1181391 : else if (tupleRelId == ConstraintRelationId)
1534 : : {
1535 : 42126 : Form_pg_constraint constrtup = (Form_pg_constraint) GETSTRUCT(tuple);
1536 : :
1537 : : /*
1538 : : * Foreign keys are part of relcache entries, too, so send out an
1539 : : * inval for the table that the FK applies to.
1540 : : */
1541 [ + + ]: 42126 : if (constrtup->contype == CONSTRAINT_FOREIGN &&
1542 [ + - ]: 4301 : OidIsValid(constrtup->conrelid))
1543 : : {
1544 : 4301 : relationId = constrtup->conrelid;
1545 : 4301 : databaseId = MyDatabaseId;
1546 : : }
1547 : : else
1548 : 37825 : return;
1549 : : }
1550 : : else
5135 tgl@sss.pgh.pa.us 1551 : 1139265 : return;
1552 : :
1553 : : /*
1554 : : * Yes. We need to register a relcache invalidation event.
1555 : : */
316 noah@leadboat.com 1556 : 868578 : RegisterRelcacheInvalidation(info, databaseId, relationId);
1557 : : }
1558 : :
1559 : : /*
1560 : : * CacheInvalidateHeapTuple
1561 : : * Register the given tuple for invalidation at end of command
1562 : : * (ie, current command is creating or outdating this tuple) and end of
1563 : : * transaction. Also, detect whether a relcache invalidation is implied.
1564 : : *
1565 : : * For an insert or delete, tuple is the target tuple and newtuple is NULL.
1566 : : * For an update, we are called just once, with tuple being the old tuple
1567 : : * version and newtuple the new version. This allows avoidance of duplicate
1568 : : * effort during an update.
1569 : : */
1570 : : void
1571 : 11046576 : CacheInvalidateHeapTuple(Relation relation,
1572 : : HeapTuple tuple,
1573 : : HeapTuple newtuple)
1574 : : {
1575 : 11046576 : CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1576 : : PrepareInvalidationState);
1577 : 11046576 : }
1578 : :
1579 : : /*
1580 : : * CacheInvalidateHeapTupleInplace
1581 : : * Register the given tuple for nontransactional invalidation pertaining
1582 : : * to an inplace update. Also, detect whether a relcache invalidation is
1583 : : * implied.
1584 : : *
1585 : : * Like CacheInvalidateHeapTuple(), but for inplace updates.
1586 : : */
1587 : : void
1588 : 90996 : CacheInvalidateHeapTupleInplace(Relation relation,
1589 : : HeapTuple tuple,
1590 : : HeapTuple newtuple)
1591 : : {
1592 : 90996 : CacheInvalidateHeapTupleCommon(relation, tuple, newtuple,
1593 : : PrepareInplaceInvalidationState);
9371 inoue@tpf.co.jp 1594 : 90996 : }
1595 : :
1596 : : /*
1597 : : * CacheInvalidateCatalog
1598 : : * Register invalidation of the whole content of a system catalog.
1599 : : *
1600 : : * This is normally used in VACUUM FULL/CLUSTER, where we haven't so much
1601 : : * changed any tuples as moved them around. Some uses of catcache entries
1602 : : * expect their TIDs to be correct, so we have to blow away the entries.
1603 : : *
1604 : : * Note: we expect caller to verify that the rel actually is a system
1605 : : * catalog. If it isn't, no great harm is done, just a wasted sinval message.
1606 : : */
1607 : : void
5690 tgl@sss.pgh.pa.us 1608 : 100 : CacheInvalidateCatalog(Oid catalogId)
1609 : : {
1610 : : Oid databaseId;
1611 : :
1612 [ + + ]: 100 : if (IsSharedRelation(catalogId))
1613 : 18 : databaseId = InvalidOid;
1614 : : else
1615 : 82 : databaseId = MyDatabaseId;
1616 : :
316 noah@leadboat.com 1617 : 100 : RegisterCatalogInvalidation(PrepareInvalidationState(),
1618 : : databaseId, catalogId);
5690 tgl@sss.pgh.pa.us 1619 : 100 : }
1620 : :
1621 : : /*
1622 : : * CacheInvalidateRelcache
1623 : : * Register invalidation of the specified relation's relcache entry
1624 : : * at end of command.
1625 : : *
1626 : : * This is used in places that need to force relcache rebuild but aren't
1627 : : * changing any of the tuples recognized as contributors to the relcache
1628 : : * entry by CacheInvalidateHeapTuple. (An example is dropping an index.)
1629 : : */
1630 : : void
7879 1631 : 75985 : CacheInvalidateRelcache(Relation relation)
1632 : : {
1633 : : Oid databaseId;
1634 : : Oid relationId;
1635 : :
1636 : 75985 : relationId = RelationGetRelid(relation);
1637 [ + + ]: 75985 : if (relation->rd_rel->relisshared)
1638 : 3484 : databaseId = InvalidOid;
1639 : : else
1640 : 72501 : databaseId = MyDatabaseId;
1641 : :
316 noah@leadboat.com 1642 : 75985 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1643 : : databaseId, relationId);
7879 tgl@sss.pgh.pa.us 1644 : 75985 : }
1645 : :
1646 : : /*
1647 : : * CacheInvalidateRelcacheAll
1648 : : * Register invalidation of the whole relcache at the end of command.
1649 : : *
 1650 : :  * This is used by ALTER PUBLICATION, since publication changes may affect
 1651 : :  * a large number of tables.
1652 : : */
1653 : : void
3152 peter_e@gmx.net 1654 : 87 : CacheInvalidateRelcacheAll(void)
1655 : : {
316 noah@leadboat.com 1656 : 87 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1657 : : InvalidOid, InvalidOid);
3152 peter_e@gmx.net 1658 : 87 : }
1659 : :
1660 : : /*
1661 : : * CacheInvalidateRelcacheByTuple
1662 : : * As above, but relation is identified by passing its pg_class tuple.
1663 : : */
1664 : : void
7879 tgl@sss.pgh.pa.us 1665 : 35892 : CacheInvalidateRelcacheByTuple(HeapTuple classTuple)
1666 : : {
1667 : 35892 : Form_pg_class classtup = (Form_pg_class) GETSTRUCT(classTuple);
1668 : : Oid databaseId;
1669 : : Oid relationId;
1670 : :
2482 andres@anarazel.de 1671 : 35892 : relationId = classtup->oid;
7879 tgl@sss.pgh.pa.us 1672 [ + + ]: 35892 : if (classtup->relisshared)
1673 : 979 : databaseId = InvalidOid;
1674 : : else
1675 : 34913 : databaseId = MyDatabaseId;
316 noah@leadboat.com 1676 : 35892 : RegisterRelcacheInvalidation(PrepareInvalidationState(),
1677 : : databaseId, relationId);
9371 inoue@tpf.co.jp 1678 : 35892 : }
1679 : :
1680 : : /*
1681 : : * CacheInvalidateRelcacheByRelid
1682 : : * As above, but relation is identified by passing its OID.
1683 : : * This is the least efficient of the three options; use one of
1684 : : * the above routines if you have a Relation or pg_class tuple.
1685 : : */
1686 : : void
7793 tgl@sss.pgh.pa.us 1687 : 14333 : CacheInvalidateRelcacheByRelid(Oid relid)
1688 : : {
1689 : : HeapTuple tup;
1690 : :
5683 rhaas@postgresql.org 1691 : 14333 : tup = SearchSysCache1(RELOID, ObjectIdGetDatum(relid));
7793 tgl@sss.pgh.pa.us 1692 [ - + ]: 14333 : if (!HeapTupleIsValid(tup))
7793 tgl@sss.pgh.pa.us 1693 [ # # ]:UBC 0 : elog(ERROR, "cache lookup failed for relation %u", relid);
7793 tgl@sss.pgh.pa.us 1694 :CBC 14333 : CacheInvalidateRelcacheByTuple(tup);
1695 : 14333 : ReleaseSysCache(tup);
1696 : 14333 : }
1697 : :
1698 : : /*
1699 : : * CacheInvalidateRelSync
1700 : : * Register invalidation of the cache in logical decoding output plugin
1701 : : * for a database.
1702 : : *
 1703 : :  * This type of invalidation message exists specifically for the use of
 1704 : :  * logical decoding output plugins.  Processes that do not decode WAL do
 1705 : :  * nothing upon receiving the message.
1706 : : */
1707 : : void
177 akapila@postgresql.o 1708 : 6 : CacheInvalidateRelSync(Oid relid)
1709 : : {
1710 : 6 : RegisterRelsyncInvalidation(PrepareInvalidationState(),
1711 : : MyDatabaseId, relid);
1712 : 6 : }
1713 : :
1714 : : /*
1715 : : * CacheInvalidateRelSyncAll
1716 : : * Register invalidation of the whole cache in logical decoding output
1717 : : * plugin.
1718 : : */
1719 : : void
1720 : 3 : CacheInvalidateRelSyncAll(void)
1721 : : {
1722 : 3 : CacheInvalidateRelSync(InvalidOid);
1723 : 3 : }
1724 : :
1725 : : /*
1726 : : * CacheInvalidateSmgr
1727 : : * Register invalidation of smgr references to a physical relation.
1728 : : *
1729 : : * Sending this type of invalidation msg forces other backends to close open
1730 : : * smgr entries for the rel. This should be done to flush dangling open-file
1731 : : * references when the physical rel is being dropped or truncated. Because
1732 : : * these are nontransactional (i.e., not-rollback-able) operations, we just
1733 : : * send the inval message immediately without any queuing.
1734 : : *
1735 : : * Note: in most cases there will have been a relcache flush issued against
1736 : : * the rel at the logical level. We need a separate smgr-level flush because
1737 : : * it is possible for backends to have open smgr entries for rels they don't
1738 : : * have a relcache entry for, e.g. because the only thing they ever did with
1739 : : * the rel is write out dirty shared buffers.
1740 : : *
1741 : : * Note: because these messages are nontransactional, they won't be captured
1742 : : * in commit/abort WAL entries. Instead, calls to CacheInvalidateSmgr()
1743 : : * should happen in low-level smgr.c routines, which are executed while
1744 : : * replaying WAL as well as when creating it.
1745 : : *
1746 : : * Note: In order to avoid bloating SharedInvalidationMessage, we store only
1747 : : * three bytes of the ProcNumber using what would otherwise be padding space.
1748 : : * Thus, the maximum possible ProcNumber is 2^23-1.
1749 : : */
1750 : : void
1158 rhaas@postgresql.org 1751 : 48837 : CacheInvalidateSmgr(RelFileLocatorBackend rlocator)
1752 : : {
1753 : : SharedInvalidationMessage msg;
1754 : :
1755 : : /* verify optimization stated above stays valid */
1756 : : StaticAssertStmt(MAX_BACKENDS_BITS <= 23,
1757 : : "MAX_BACKENDS_BITS is too big for inval.c");
1758 : :
5694 tgl@sss.pgh.pa.us 1759 : 48837 : msg.sm.id = SHAREDINVALSMGR_ID;
1158 rhaas@postgresql.org 1760 : 48837 : msg.sm.backend_hi = rlocator.backend >> 16;
1761 : 48837 : msg.sm.backend_lo = rlocator.backend & 0xffff;
1074 1762 : 48837 : msg.sm.rlocator = rlocator.locator;
1763 : : /* check AddCatcacheInvalidationMessage() for an explanation */
1764 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1765 : :
5694 tgl@sss.pgh.pa.us 1766 : 48837 : SendSharedInvalidMessages(&msg, 1);
1767 : 48837 : }
1768 : :
1769 : : /*
1770 : : * CacheInvalidateRelmap
1771 : : * Register invalidation of the relation mapping for a database,
1772 : : * or for the shared catalogs if databaseId is zero.
1773 : : *
1774 : : * Sending this type of invalidation msg forces other backends to re-read
1775 : : * the indicated relation mapping file. It is also necessary to send a
1776 : : * relcache inval for the specific relations whose mapping has been altered,
1777 : : * else the relcache won't get updated with the new filenode data.
1778 : : *
1779 : : * Note: because these messages are nontransactional, they won't be captured
1780 : : * in commit/abort WAL entries. Instead, calls to CacheInvalidateRelmap()
1781 : : * should happen in low-level relmapper.c routines, which are executed while
1782 : : * replaying WAL as well as when creating it.
1783 : : */
1784 : : void
5690 1785 : 179 : CacheInvalidateRelmap(Oid databaseId)
1786 : : {
1787 : : SharedInvalidationMessage msg;
1788 : :
1789 : 179 : msg.rm.id = SHAREDINVALRELMAP_ID;
1790 : 179 : msg.rm.dbId = databaseId;
1791 : : /* check AddCatcacheInvalidationMessage() for an explanation */
1792 : : VALGRIND_MAKE_MEM_DEFINED(&msg, sizeof(msg));
1793 : :
1794 : 179 : SendSharedInvalidMessages(&msg, 1);
1795 : 179 : }
1796 : :
1797 : :
1798 : : /*
1799 : : * CacheRegisterSyscacheCallback
1800 : : * Register the specified function to be called for all future
1801 : : * invalidation events in the specified cache. The cache ID and the
1802 : : * hash value of the tuple being invalidated will be passed to the
1803 : : * function.
1804 : : *
1805 : : * NOTE: Hash value zero will be passed if a cache reset request is received.
1806 : : * In this case the called routines should flush all cached state.
1807 : : * Yes, there's a possibility of a false match to zero, but it doesn't seem
1808 : : * worth troubling over, especially since most of the current callees just
1809 : : * flush all cached state anyway.
1810 : : */
1811 : : void
8531 1812 : 246861 : CacheRegisterSyscacheCallback(int cacheid,
1813 : : SyscacheCallbackFunction func,
1814 : : Datum arg)
1815 : : {
3039 1816 [ + - - + ]: 246861 : if (cacheid < 0 || cacheid >= SysCacheSize)
3039 tgl@sss.pgh.pa.us 1817 [ # # ]:UBC 0 : elog(FATAL, "invalid cache ID: %d", cacheid);
6206 tgl@sss.pgh.pa.us 1818 [ - + ]:CBC 246861 : if (syscache_callback_count >= MAX_SYSCACHE_CALLBACKS)
6206 tgl@sss.pgh.pa.us 1819 [ # # ]:UBC 0 : elog(FATAL, "out of syscache_callback_list slots");
1820 : :
3039 tgl@sss.pgh.pa.us 1821 [ + + ]:CBC 246861 : if (syscache_callback_links[cacheid] == 0)
1822 : : {
1823 : : /* first callback for this cache */
1824 : 174778 : syscache_callback_links[cacheid] = syscache_callback_count + 1;
1825 : : }
1826 : : else
1827 : : {
1828 : : /* add to end of chain, so that older callbacks are called first */
1829 : 72083 : int i = syscache_callback_links[cacheid] - 1;
1830 : :
1831 [ + + ]: 86819 : while (syscache_callback_list[i].link > 0)
1832 : 14736 : i = syscache_callback_list[i].link - 1;
1833 : 72083 : syscache_callback_list[i].link = syscache_callback_count + 1;
1834 : : }
1835 : :
6206 1836 : 246861 : syscache_callback_list[syscache_callback_count].id = cacheid;
3039 1837 : 246861 : syscache_callback_list[syscache_callback_count].link = 0;
6206 1838 : 246861 : syscache_callback_list[syscache_callback_count].function = func;
1839 : 246861 : syscache_callback_list[syscache_callback_count].arg = arg;
1840 : :
1841 : 246861 : ++syscache_callback_count;
8531 1842 : 246861 : }
1843 : :
1844 : : /*
1845 : : * CacheRegisterRelcacheCallback
1846 : : * Register the specified function to be called for all future
1847 : : * relcache invalidation events. The OID of the relation being
1848 : : * invalidated will be passed to the function.
1849 : : *
1850 : : * NOTE: InvalidOid will be passed if a cache reset request is received.
1851 : : * In this case the called routines should flush all cached state.
1852 : : */
1853 : : void
6206 1854 : 19695 : CacheRegisterRelcacheCallback(RelcacheCallbackFunction func,
1855 : : Datum arg)
1856 : : {
1857 [ - + ]: 19695 : if (relcache_callback_count >= MAX_RELCACHE_CALLBACKS)
6206 tgl@sss.pgh.pa.us 1858 [ # # ]:UBC 0 : elog(FATAL, "out of relcache_callback_list slots");
1859 : :
6206 tgl@sss.pgh.pa.us 1860 :CBC 19695 : relcache_callback_list[relcache_callback_count].function = func;
1861 : 19695 : relcache_callback_list[relcache_callback_count].arg = arg;
1862 : :
1863 : 19695 : ++relcache_callback_count;
8531 1864 : 19695 : }
1865 : :
1866 : : /*
1867 : : * CacheRegisterRelSyncCallback
1868 : : * Register the specified function to be called for all future
1869 : : * relsynccache invalidation events.
1870 : : *
 1871 : :  * This function is intended to be called from logical decoding output
 1872 : :  * plugins.
1873 : : */
1874 : : void
177 akapila@postgresql.o 1875 : 399 : CacheRegisterRelSyncCallback(RelSyncCallbackFunction func,
1876 : : Datum arg)
1877 : : {
1878 [ - + ]: 399 : if (relsync_callback_count >= MAX_RELSYNC_CALLBACKS)
177 akapila@postgresql.o 1879 [ # # ]:UBC 0 : elog(FATAL, "out of relsync_callback_list slots");
1880 : :
177 akapila@postgresql.o 1881 :CBC 399 : relsync_callback_list[relsync_callback_count].function = func;
1882 : 399 : relsync_callback_list[relsync_callback_count].arg = arg;
1883 : :
1884 : 399 : ++relsync_callback_count;
1885 : 399 : }
1886 : :
1887 : : /*
1888 : : * CallSyscacheCallbacks
1889 : : *
1890 : : * This is exported so that CatalogCacheFlushCatalog can call it, saving
1891 : : * this module from knowing which catcache IDs correspond to which catalogs.
1892 : : */
1893 : : void
5135 tgl@sss.pgh.pa.us 1894 : 11414849 : CallSyscacheCallbacks(int cacheid, uint32 hashvalue)
1895 : : {
1896 : : int i;
1897 : :
3039 1898 [ + - - + ]: 11414849 : if (cacheid < 0 || cacheid >= SysCacheSize)
3039 tgl@sss.pgh.pa.us 1899 [ # # ]:UBC 0 : elog(ERROR, "invalid cache ID: %d", cacheid);
1900 : :
3039 tgl@sss.pgh.pa.us 1901 :CBC 11414849 : i = syscache_callback_links[cacheid] - 1;
1902 [ + + ]: 13019033 : while (i >= 0)
1903 : : {
5690 1904 : 1604184 : struct SYSCACHECALLBACK *ccitem = syscache_callback_list + i;
1905 : :
3039 1906 [ - + ]: 1604184 : Assert(ccitem->id == cacheid);
2921 peter_e@gmx.net 1907 : 1604184 : ccitem->function(ccitem->arg, cacheid, hashvalue);
3039 tgl@sss.pgh.pa.us 1908 : 1604184 : i = ccitem->link - 1;
1909 : : }
5690 1910 : 11414849 : }
1911 : :
1912 : : /*
 1913 : :  * CallRelSyncCallbacks
1914 : : */
1915 : : void
177 akapila@postgresql.o 1916 : 31 : CallRelSyncCallbacks(Oid relid)
1917 : : {
1918 [ + + ]: 52 : for (int i = 0; i < relsync_callback_count; i++)
1919 : : {
1920 : 21 : struct RELSYNCCALLBACK *ccitem = relsync_callback_list + i;
1921 : :
1922 : 21 : ccitem->function(ccitem->arg, relid);
1923 : : }
1924 : 31 : }
1925 : :
1926 : : /*
1927 : : * LogLogicalInvalidations
1928 : : *
1929 : : * Emit WAL for invalidations caused by the current command.
1930 : : *
1931 : : * This is currently only used for logging invalidations at the command end
1932 : : * or at commit time if any invalidations are pending.
1933 : : */
1934 : : void
1482 tgl@sss.pgh.pa.us 1935 : 16430 : LogLogicalInvalidations(void)
1936 : : {
1937 : : xl_xact_invals xlrec;
1938 : : InvalidationMsgsGroup *group;
1939 : : int nmsgs;
1940 : :
1941 : : /* Quick exit if we haven't done anything with invalidation messages. */
1871 akapila@postgresql.o 1942 [ + + ]: 16430 : if (transInvalInfo == NULL)
1943 : 10297 : return;
1944 : :
316 noah@leadboat.com 1945 : 6133 : group = &transInvalInfo->ii.CurrentCmdInvalidMsgs;
1482 tgl@sss.pgh.pa.us 1946 : 6133 : nmsgs = NumMessagesInGroup(group);
1947 : :
1871 akapila@postgresql.o 1948 [ + + ]: 6133 : if (nmsgs > 0)
1949 : : {
1950 : : /* prepare record */
1951 : 4840 : memset(&xlrec, 0, MinSizeOfXactInvals);
1952 : 4840 : xlrec.nmsgs = nmsgs;
1953 : :
1954 : : /* perform insertion */
1955 : 4840 : XLogBeginInsert();
207 peter@eisentraut.org 1956 : 4840 : XLogRegisterData(&xlrec, MinSizeOfXactInvals);
1482 tgl@sss.pgh.pa.us 1957 [ + + ]: 4840 : ProcessMessageSubGroupMulti(group, CatCacheMsgs,
1958 : : XLogRegisterData(msgs,
1959 : : n * sizeof(SharedInvalidationMessage)));
1960 [ + + ]: 4840 : ProcessMessageSubGroupMulti(group, RelCacheMsgs,
1961 : : XLogRegisterData(msgs,
1962 : : n * sizeof(SharedInvalidationMessage)));
1871 akapila@postgresql.o 1963 : 4840 : XLogInsert(RM_XACT_ID, XLOG_XACT_INVALIDATIONS);
1964 : : }
1965 : : }
|