Optionally record a plan_id in PlannedStmt to identify plan shape #12

Open

lfittl wants to merge 149 commits into master from lfittl-planid-tracking

Conversation

lfittl
Owner

@lfittl lfittl commented Dec 31, 2024

No description provided.

bmomjian and others added 4 commits January 1, 2025 11:21
Backpatch-through: 13
CHUNKHDRSZ was defined as 16 bytes, which was true when that code went in,
but since c6e0fe1, 8 is a more accurate value.  Here we adjust it to use
sizeof(MemoryChunk), which is normally 8, or 16 for cassert builds.

c6e0fe1 first appeared in v16, so this is technically wrong in v16 up
to master, but let's apply this only to master as adjusting this does
influence the estimated number of batches in the aggregate costing code
and we don't want to cause plan instability in released versions.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/CAApHDvpMpRQvsTqZo3FinXkgytwxwF8sCyZm83xDj-1s_hLe+w@mail.gmail.com
When looking up statistical data about an expression, we do not need
to concern ourselves with the outer joins that could null the
Vars/PHVs contained in the expression.  Accounting for nullingrels in
the expression could cause estimate_num_groups to count the same Var
multiple times if it's marked with different nullingrels.  This is
incorrect, and could lead to "ERROR:  corrupt MVNDistinct entry" when
searching for multivariate n-distinct.

Furthermore, the nullingrels could prevent us from matching an
expression to expressional index columns or to the expressions in
extended statistics, leading to inaccurate estimates.

To fix, strip out all the nullingrels from the expression before we
look up statistical data about it.  There is one ensuing plan change
in the regression tests, but it looks reasonable and does not
compromise the test's original purpose.

This patch could result in plan changes, but it fixes an actual bug,
so back-patch to v16 where the outer-join-aware-Var infrastructure was
introduced.

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-2Z4k+nFTiZe0Qbu5n8juUWenDAtMzi98bAZQtwHx0-w@mail.gmail.com
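A hedged sketch of the failure shape (table, column, and statistics
names are hypothetical): multivariate n-distinct statistics combined
with grouping on Vars from the nullable side of an outer join
exercised the buggy path.

    -- Hypothetical repro shape; names and data are illustrative.
    CREATE TABLE t (a int, b int);
    CREATE TABLE u (x int);
    CREATE STATISTICS t_nd (ndistinct) ON a, b FROM t;
    ANALYZE t;

    -- Grouping on Vars that can be nulled by the LEFT JOIN could
    -- previously fail with "ERROR:  corrupt MVNDistinct entry".
    SELECT t.a, t.b, count(*)
    FROM u LEFT JOIN t ON u.x = t.a
    GROUP BY t.a, t.b;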
@lfittl lfittl force-pushed the lfittl-planid-tracking branch 4 times, most recently from 78b537c to 9840edb Compare January 2, 2025 20:26
adunstan and others added 20 commits January 3, 2025 10:36
Slightly faulty logic in the original jsonb code (commit d9134d0)
results in an empty top level array sorting less than a json null. We
can't change the sort order now since it would affect btree indexes over
jsonb, so document the anomaly.

Backpatch to all live branches (13 .. 17)

In master, also add a code comment noting the anomaly.

Reported-by: Yan Chengpen
Reviewed-by: Jian He

Discussion: https://postgr.es/m/OSBPR01MB45199DD8DA2D1CECD50518188E272@OSBPR01MB4519.jpnprd01.prod.outlook.com
The test for "decltype" as a variant of "typeof" apparently never
worked (see also commit 3582b22), so remove it.

Discussion: https://www.postgresql.org/message-id/flat/795b1c54-c64a-47f9-8fa3-880dcab59975%40eisentraut.org
Typically only one iterator is present at any time, so it's overkill
to devote an entire memory context to it.  Get rid of it and use the
caller's context.

This is tidy-up work, so no backpatch in this form. However, a
hypothetical extension to v17 that tried to start iteration from
an attaching backend would result in a crash, so that'll be fixed
separately in a way that doesn't change behavior in core.

Patch by me, reported and reviewed by Masahiko Sawada

Discussion: https://postgr.es/m/CAD21AoBB2U47V=F+wQRB1bERov_of5=BOZGaybjaV8FLQyqG3Q@mail.gmail.com
Previously, this was notionally used only for the entry point of the
tree and as a convenient parent for other contexts.

For shared memory, the creator previously allocated the entry point
in this context, but attaching backends didn't have access to that,
so they just used the caller's context. For the sake of consistency,
allocate every instance of an entry point in the caller's context.

For local memory, allocate the control object in the caller's context
as well. This commit also makes the "leaf context" the notional parent
of the child contexts used for nodes, so it's a bit of a misnomer,
but a future commit will make the node contexts independent rather
than children, so leave it this way for now to avoid code churn.

The memory context parameter for RT_CREATE is now unused in the case
of shared memory, so remove it and adjust callers to match.

In passing, remove unused "context" member from struct TidStore,
which seems to have been an oversight.

Reviewed by Masahiko Sawada

Discussion: https://postgr.es/m/CANWCAZZDCo4k5oURg_pPxM6+WZ1oiG=sqgjmQiELuyP0Vtrwig@mail.gmail.com
Previously, it would not have worked for a caller to pass a slab
context, since it would have been used for other things which likely
had incompatible size. In an attempt to be helpful and avoid possible
space wastage due to aset's power-of-two rounding, RT_CREATE would
create an additional slab context if the value type was fixed-length
and larger than pointer size. The problem was, we have since added
the bump context type, and the generation context was a possibility as
well, so silently overriding the caller's choice may actually be worse.

Commit e8a6f1f arranged so that the caller-provided context is
used only for leaves, so it's safe for the caller to use slab here
if they wish. As demonstration, use slab in one of the radix tree
regression tests.

Reviewed by Masahiko Sawada

Discussion: https://postgr.es/m/CANWCAZZDCo4k5oURg_pPxM6+WZ1oiG=sqgjmQiELuyP0Vtrwig@mail.gmail.com
VERBOSE messages from ANALYZE, CLUSTER, REINDEX, and VACUUM are logged
at the INFO level, but this detail was missing from the documentation.
This commit updates the docs to mention the log level for these messages.

Author: Masahiro Ikeda
Reviewed-by: Yugo Nagata
Discussion: https://postgr.es/m/[email protected]
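For illustration (table name hypothetical; the exact message text
varies by version), the VERBOSE report below is emitted at INFO:

    VACUUM (VERBOSE) accounts;
    -- INFO:  vacuuming "public.accounts"
    -- INFO:  finished vacuuming "public.accounts": index scans: 0
    -- ...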
Replace #define YY_EXTRA_TYPE with %option extra-type.  The latter is
the way recommended by the flex manual (available since flex 2.5.34).

Reviewed-by: Heikki Linnakangas <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
These are also present in procnumber.h

Reported-by: Peter Eisentraut
Discussion: https://www.postgresql.org/message-id/[email protected]
This commit introduces a new parameter named
autovacuum_worker_slots that controls how many autovacuum worker
slots to reserve during server startup.  Modifying this new
parameter's value does require a server restart, but it should
typically be set to the upper bound of what you might realistically
need autovacuum_max_workers to be.  With that new parameter in
place, autovacuum_max_workers can now be changed with a SIGHUP
(e.g., pg_ctl reload).

If autovacuum_max_workers is set higher than
autovacuum_worker_slots, a WARNING is emitted, and the server will
only start up to autovacuum_worker_slots workers at a given time.
If autovacuum_max_workers is set to a value less than the number of
currently-running autovacuum workers, the existing workers will
continue running, but no new workers will be started until the
number of running autovacuum workers drops below
autovacuum_max_workers.

Reviewed-by: Sami Imseih, Justin Pryzby, Robert Haas, Andres Freund, Yogesh Sharma
Discussion: https://postgr.es/m/20240410212344.GA1824549%40nathanxps13
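A sketch of how the two parameters interact (values illustrative):

    -- Reserved at server startup; changing it requires a restart.
    ALTER SYSTEM SET autovacuum_worker_slots = 16;

    -- Can now be changed with a reload, bounded by the slot count.
    ALTER SYSTEM SET autovacuum_max_workers = 6;
    SELECT pg_reload_conf();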
The parameter 'rel' in lookup_var_attr_stats was once used to draw an
ERROR when ANALYZE failed to acquire sufficient data to build extended
statistics.  bf2a691 changed the logic to raise a WARNING in the
caller instead.  As a result, this parameter is no longer needed and
can be removed.  Since this is a static function, we can always easily
reintroduce the parameter if it's ever needed in the future.

Author: Ilia Evdokimov
Reviewed-by: Fabrízio de Royes Mello
Discussion: https://postgr.es/m/[email protected]
This new structure relieves _bt_first from having separate calls to
_bt_start_array_keys for the serial case and parallel case.  This saves
code, and seems clearer.

Follow-up to work from commits 4e6e375 and b5ee4e5.

Author: Peter Geoghegan <[email protected]>
Reviewed-By: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wz=XjUZjBjHJdhTvuH5MwoJObWGoM2RG2LyFg5WUdWyk=A@mail.gmail.com
Move nbtree's detection of RowCompare quals that are unsatisfiable due
to having a NULL in their first row element: rather than detecting these
cases at the point where _bt_first builds its insertion scan key, do so
earlier, during preprocessing proper.  This brings the RowCompare case
in line with every other case involving an unsatisfiable-due-to-NULL qual.

nbtree now consistently detects such unsatisfiable quals -- even when
they happen to involve a key that isn't examined by _bt_first at all.
Affected cases thereby avoid useless full index scans that cannot
possibly return any matching rows.

Author: Peter Geoghegan <[email protected]>
Reviewed-By: Matthias van de Meent <[email protected]>
Discussion: https://postgr.es/m/CAH2-WzmySVXst2hFrOATC-zw1Byg1XC-jYUS314=mzuqsNwk+Q@mail.gmail.com
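For example (table hypothetical), this row-compare qual can never
return rows, and preprocessing now detects that up front instead of
running a full index scan:

    -- A NULL in the first row element makes the comparison yield
    -- NULL rather than true for every index tuple.
    SELECT * FROM t WHERE ROW(a, b) > ROW(NULL, 1);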
Commit 14e87ff needlessly added support for CONSTR_NOTNULL entries
to StoreConstraints.  It's dead code, so remove it.

To make the situation regarding constraint creation clearer, change
comments in heap_create_with_catalog, StoreConstraints, MergeAttributes
to explain which types of constraint are used on each.

Author: 何建 (Jian He) <[email protected]>
Discussion: https://postgr.es/m/CACJufxFxzqrCiUNfjJ0tQU+=nKQkQCGtGzUBude=SMOwj5VNjQ@mail.gmail.com
A couple of checks were missed by commit 962da90, so we would fail to
detect the features.

Reported-by: Юрий Соколов <[email protected]>
Discussion: https://postgr.es/m/42C25E2A-6519-4549-9F47-6B0686E83836%40postgrespro.ru
The originals are ambiguous and a bit out of style.

Reviewed-by: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Commit c758119 increased the default number of semaphores
required for autovacuum workers from 3 to 16.  Unfortunately, some
systems have very low default settings for SEMMNS, and this change
moved the minimum required for Postgres well beyond that limit (see
commit 38da053 for more details).

With this commit, initdb will lower the default value for
autovacuum_worker_slots as needed, just like it already does for
parameters such as max_connections and shared_buffers.  We test
for (max_connections / 6) slots, which conveniently has the
following properties:

* For the initial max_connections default of 100, the default of
  autovacuum_worker_slots will be 16, which is its initial default
  value specified in the documentation and in guc_tables.c.

* For the lowest possible max_connections default of 25, the
  default of autovacuum_worker_slots will be 4, which means we only
  need one additional semaphore for autovacuum workers (as compared
  to before commit c758119).  This leaves some wiggle room for
  new auxiliary workers, etc. on systems with low SEMMNS, and it
  ensures that the default number of slots will be greater than or
  equal to the default value of autovacuum_max_workers (3).

Reported-by: Tom Lane
Suggested-by: Andres Freund
Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/1346002.1736198977%40sss.pgh.pa.us
This new parameter can be used to change the minimum allowed
password length (in bytes).  Note that it has no effect if a user
supplies a pre-encrypted password.

Author: Emanuele Musella, Maurizio Boriani
Reviewed-by: Tomas Vondra, Bertrand Drouvot, Japin Li
Discussion: https://postgr.es/m/CA%2BugDNyYtHOtWCqVD3YkSVYDWD_1fO8Jm_ahsDGA5dXhbDPwrQ%40mail.gmail.com
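Assuming this is contrib/passwordcheck's parameter and that it is
exposed as passwordcheck.min_password_length (the message does not
name it here), usage would look like:

    LOAD 'passwordcheck';
    SET passwordcheck.min_password_length = 12;  -- assumed name
    -- Plain-text passwords shorter than 12 bytes are rejected;
    -- pre-encrypted passwords bypass the check.
    CREATE ROLE alice PASSWORD 'tooshort';  -- fails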
Commit f4b54e1, which introduced macros for protocol characters,
missed updating a couple of places in postgres.c.

Author: Dave Cramer
Reviewed-by: Fabrízio de Royes Mello
Discussion: https://postgr.es/m/CADK3HHJUVBPoVOmFesPB-fN8_dYt%2BQELV2UB6jxOW2Z40qF-qw%40mail.gmail.com
Backpatch-through: 17
michaelpq and others added 27 commits January 20, 2025 09:29
XLogPageRead() checks immediately for an invalid WAL record header on a
standby, to be able to handle the case of continuation records that need
to be read across two different sources.  As written, the check was too
generic, applying to any target LSN.  Based on an analysis by Kyotaro
Horiguchi, what really matters is to make sure that the page header is
checked when attempting to read an LSN at the boundary of a segment,
to handle the case of a continuation record that spans multiple pages
across segments: WAL receivers, when spawned, request WAL from the
beginning of a segment.  This fix has been proposed by Kyotaro
Horiguchi.

This could cause standbys to loop infinitely when dealing with a
continuation record during a timeline jump, in the case where the
contents of the record in the follow-up page are invalid.

Some regression tests are added to check such scenarios, able to
reproduce the original problem.  In the test, the contents of a
continuation record are overwritten with junk zeros on its follow-up
page, and replayed on standbys.  This is inspired by 039_end_of_wal.pl,
and is enough to show how standbys should react on promotion by not
being stuck.  Without the fix, the test would fail with a timeout.  The
test to reproduce the problem has been written by Alexander Kukushkin.

The original check has been introduced in 0668719, for a similar
problem.

Author: Kyotaro Horiguchi, Alexander Kukushkin
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAFh8B=mozC+e1wGJq0H=0O65goZju+6ab5AU7DEWCSUA2OtwDg@mail.gmail.com
Backpatch-through: 13
If a WaitEventSetWait() caller asks for multiple events, an already set
latch would previously prevent other events from being reported at the
same time.  Now, we'll also poll the kernel for other events that would
fit in the caller's output buffer with a zero wait time.  This policy
change doesn't affect callers that ask for only one event.

The main caller affected is the postmaster.  If its latch is set
extremely frequently by backends launching workers and workers exiting,
we don't want it to handle only those jobs and ignore incoming client
connections.

Back-patch to 16 where the postmaster began using the API.  The
fast-return policy changed here is older than that, but doesn't cause
any known problems in earlier releases.

Reported-by: Nathan Bossart <[email protected]>
Reviewed-by: Nathan Bossart <[email protected]>
Discussion: https://postgr.es/m/Z1n5UpAiGDmFcMmd%40nathan
This adds the C type PageData and makes the existing type Page a
pointer to it.  This follows the usual PostgreSQL C type naming scheme
of Foo/FooData pairs.  (Prior to commit ddbba3a, PageData existed
as an unrelated type.)  The type definitions are compatible, so this
doesn't change anything except some of the naming.

Discussion: https://www.postgresql.org/message-id/flat/[email protected]
This makes use of the new PageData type.

PageGetSpecialPointer() had to be turned back into a macro, because it
is used in a way that sometimes it takes const and returns const and
sometimes takes non-const and returns non-const.

Discussion: https://www.postgresql.org/message-id/flat/[email protected]
It makes more sense to put the catalog sanity check at the end of the
test rather than at the beginning, so that it can also check whatever
the tests did rather than just whatever happened before the tests.

Suggested-by: jian he <[email protected]>

Discussion: https://www.postgresql.org/message-id/flat/[email protected]
The freshly-released 2025a version of tzdata has a refined estimate
for the longitude of Manila, changing their value for LMT in
pre-standardized-timezone days.  This changes the output of one of
our test cases.  Since we need to be able to run with system tzdata
files that may or may not contain this update, we'd better stop
making that specific test.

I switched it to use Asia/Singapore, which has a roughly similar UTC
offset.  That LMT value hasn't changed in tzdb since 2003, so we can
hope that it's well established.

I also noticed that this set of make_timestamptz tests only exercises
zones east of Greenwich, which seems rather sad, and was not the
original intent AFAICS.  (We've already changed these tests once
to stabilize their results across tzdata updates, cf 66b737c;
it looks like I failed to consider the UTC-offset-sign aspect then.)
To improve that, add a test with Pacific/Honolulu.  That LMT offset
is also quite old in tzdb, so we'll cross our fingers that it doesn't
get improved.

Reported-by: Christoph Berg <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 13
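Roughly what the reworked tests exercise, with one zone on each side
of Greenwich (arguments illustrative):

    -- East of Greenwich; Singapore's LMT value is long-settled in tzdb.
    SELECT make_timestamptz(1846, 12, 10, 0, 0, 0, 'Asia/Singapore');
    -- West of Greenwich; Honolulu's LMT offset is also quite old.
    SELECT make_timestamptz(1846, 12, 10, 0, 0, 0, 'Pacific/Honolulu');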
DST law changes in Paraguay.
Historical corrections for the Philippines.

Backpatch-through: 13
The two callbacks have_fixed_pending_cb and flush_fixed_cb have been
introduced in fc415ed to provide a way for fixed-numbered
statistics to control the flush of their data.  These are renamed to
have_static_pending_cb and flush_static_cb, respectively.  The
restriction that these only apply to fixed-numbered stats is removed.

A follow-up patch will make use of them for backend statistics.  This
stats kind is variable-numbered, and patches are under discussion to
track WAL data for IO and backend stats which cannot use
PgStat_EntryRef->pending as pending data would be touched in critical
sections, where no memory allocation can happen.

Per discussion with Andres Freund.

Author: Bertrand Drouvot
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/66efowskppsns35v5u2m7k4sdnl7yoz5bo64tdjwq7r5lhplrz@y7dme5xwh2r5
9aea73f has added support for backend statistics, relying on
PgStat_EntryRef->pending for its data pending for flush.  This design
lacks flexibility, because the pending list does some memory
allocation, making it unsuitable for incrementing counters in critical
sections.

Pending data of backend statistics is reworked so the implementation
does not depend on PgStat_EntryRef->pending anymore, relying on a static
area of memory to store the counters that are flushed when stats are
reported to the pgstats dshash.  An advantage of this approach is to
allow the pending data to be manipulated in critical sections; some
patches are under discussion and require that.

The pending data is tracked by PendingBackendStats, local to
pgstat_backend.c.  Two routines are introduced to allow IO statistics to
update the backend-side counters.  have_static_pending_cb and
flush_static_cb are used for the flush, instead of flush_pending_cb.

Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/66efowskppsns35v5u2m7k4sdnl7yoz5bo64tdjwq7r5lhplrz@y7dme5xwh2r5
This commit refactors ExecScan() by moving its tuple-fetching,
filtering, and projection logic into an inline-able function,
ExecScanExtended(), defined in src/include/executor/execScan.h.
ExecScanExtended() accepts parameters for EvalPlanQual state,
qualifiers (ExprState), and projection (ProjectionInfo).

Specialized variants of the execution function of a given Scan node
(for example, ExecSeqScan() for SeqScan) can then pass const-NULL for
unused parameters.  This allows the compiler to inline the logic and
eliminate unnecessary branches or checks.  Each variant function thus
contains only the necessary code, optimizing execution for scans
where these features are not needed.

The variant function to be used is determined in the ExecInit*()
function of the node and assigned to the ExecProcNode function pointer
in the node's PlanState, effectively turning runtime checks and
conditional branches on the NULLness of epqstate, qual, and projInfo
into static ones, provided the compiler successfully eliminates
unnecessary checks from the inlined code of ExecScanExtended().

Currently, only ExecSeqScan() is modified to take advantage of this
inline-ability.  Other Scan nodes might benefit from such specialized
variant functions but that is left as future work.

Benchmarks performed by Junwang Zhao, David Rowley and myself show up
to a 5% reduction in execution time for queries that rely heavily on
Seq Scans. The most significant improvements were observed in
scenarios where EvalPlanQual, qualifiers, and projection were not
required, but other cases also benefit from reduced runtime overhead
due to the inlining and removal of unnecessary code paths.

The idea for this patch first came from Andres Freund in an off-list
discussion. The refactoring approach implemented here is based on a
proposal by David Rowley, significantly improving upon the patch I
(amitlan) initially proposed.

Suggested-by: Andres Freund <[email protected]>
Co-authored-by: David Rowley <[email protected]>
Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Junwang Zhao <[email protected]>
Tested-by: Junwang Zhao <[email protected]>
Tested-by: David Rowley <[email protected]>
Discussion: https://postgr.es/m/CA+HiwqGaH-otvqW_ce-paL=96JvU4j+Xbuk+14esJNDwefdkOg@mail.gmail.com
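In SQL terms, the fully specialized path applies to the simplest
scans; for example (table hypothetical):

    -- No qual, no projection, no EvalPlanQual: ExecSeqScan can install
    -- the variant compiled with const-NULL epqstate/qual/projInfo.
    EXPLAIN SELECT * FROM big_table;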
The test table names gtest11s and gtest12s were originally chosen
to signify "stored", when the idea was to have virtual columns in the
same test file.  This is no longer the idea, so this naming is
irrelevant.  (The upcoming feature of virtual generated columns will
have a test file that is initially a copy of generated_stored.sql, and
this random difference will be even more annoying then.)  Clean this
up by dropping the suffix.

Discussion: https://www.postgresql.org/message-id/flat/[email protected]
Make some indentation better and more consistent.  Extracted from
another patch with some actual test changes.

Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
If an UPDATE on the referenced table changes the temporal start/end
times, shrinking the span for which the row is valid, we get a false
return from ri_Check_Pk_Match(), but overlapping references may still
be valid if they didn't overlap with the removed span.

We need to consider what span(s) are still provided in the referenced
table.  Instead of returning that from ri_Check_Pk_Match(), we can
just look it up in the main SQL query.

Reported-by: Sam Gabrielsson <[email protected]>
Author: Paul Jungwirth <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
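A sketch of the scenario, using the temporal-key syntax from master
(schema and values hypothetical):

    CREATE TABLE pk (id int, valid_at daterange,
                     PRIMARY KEY (id, valid_at WITHOUT OVERLAPS));
    CREATE TABLE fk (id int, valid_at daterange, pk_id int,
                     FOREIGN KEY (pk_id, PERIOD valid_at)
                         REFERENCES pk (id, PERIOD valid_at));
    -- Shrinking a referenced span must only fail if some referencing
    -- span is no longer covered by what remains:
    UPDATE pk SET valid_at = daterange('2020-01-01', '2020-03-01')
        WHERE id = 1;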
In common cases, foreign keys are defined on the toplevel partitioned
table; but if instead one is defined on a partition and references a
partitioned table, and the referencing partition is detached, we would
examine the pg_constraint row on the partition being detached, and fail
to realize that the sub-constraints must be left alone.  This causes the
ALTER TABLE DETACH process to fail with

 ERROR:  could not find ON INSERT check triggers of foreign key constraint NNN

This is similar but not quite the same as what was fixed by
53af949.  This bug doesn't affect branches earlier than 15, because
the detach procedure was different there, so we only backpatch down to
15.

Fix by skipping such modifications for constraints that are children
of other constraints being detached.

Author: Amul Sul <[email protected]>
Diagnosed-by: Sami Imseih <[email protected]>
Discussion: https://postgr.es/m/CAAJ_b97GuPh6wQPbxQS-Zpy16Oh+0aMv-w64QcGrLhCOZZ6p+g@mail.gmail.com
Most were introduced in the 17 timeframe.  The ones in wparser_def.c are
very old.

I also changed "JSON path expression for column \"%s\" should return
single item without wrapper" to "JSON path expression for column \"%s\"
must return single item when no wrapper is requested" to avoid
ambiguity.

Backpatch to 17.

Crickets: https://postgr.es/m/[email protected]
For the purposes of this discussion, row_number() is just as good
as rank(), and its behavior is easier to understand and describe.
So let's switch the examples to using row_number().

Along the way to checking the results given in the tutorial,
I found it helpful to extract the empsalary table we use in the
regression tests, which is evidently the same data that was used
to make these results.  So I shoved that into advanced.source
to improve the coverage of that file a little.  (There's still
several pages of the tutorial that are not included in it,
but at least now 3.5 Window Functions is covered.)

Suggested-by: "David G. Johnston" <[email protected]>
Author: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/[email protected]
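With empsalary as in the regression tests, the revised example is
roughly:

    SELECT depname, empno, salary,
           row_number() OVER (PARTITION BY depname ORDER BY salary DESC)
    FROM empsalary;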
A follow-up patch will adjust the TAP tests to follow a more
structured format for option lists in commands, which perltidy is able
to cope with better.  Putting the tree in a clean state first makes
the next change a bit easier.  perltidy v20230309 has been used.

Author: Dagfinn Ilmari Mannsåker
Discussion: https://postgr.es/m/[email protected]
…rval.

In passing, update the documentation that explains the process of initial
data replication to explicitly state that it uses a table synchronization
worker.

Author: Vignesh C
Reviewed-by: Peter Smith, Shlok Kyal, Amit Kapila
Discussion: https://postgr.es/m/CALDaNm3RxGcD4cDAV5Q0_A4n06F3+AAMpxiyND9Zn0dB86hFmg@mail.gmail.com
This commit rewrites a good chunk of the command arrays in TAP tests
with a grammar based on the following rules:
- Fat commas are used between option names and their values, making it
clear to both humans and perltidy that values and names are bound
together.  This is particularly useful for the readability of multi-line
command arrays, and there are plenty of them in the TAP tests.  Most of
the test code is updated to use this style.  Some commands used
parentheses to show the link, or attached option names and values in a
single string.  These are updated to use fat commas instead.
- Option names are switched to use their long names, making them more
self-documented.  Based on a suggestion by Andrew Dunstan.
- Add some trailing commas after the last item in multi-line arrays,
which is a common Perl style.

Not all the places are taken care of, but this covers a very good chunk
of them.

Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Michael Paquier, Peter Smith, Euler Taveira
Discussion: https://postgr.es/m/[email protected]
Some additional tests have been created during the development of
virtual generated columns (not included here).  This commit adds
equivalent tests to the existing test set for stored generated
columns.  This includes expanded tests related to MERGE, subqueries,
whole-row references, permissions, domains, partitioning, and
triggers.

Author: Peter Eisentraut <[email protected]>
Co-authored-by: jian he <[email protected]>
Co-authored-by: Dean Rasheed <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/[email protected]
…sion.

psql was not careful about the new column "Generated columns" not
being present when talking to older server versions.  This was
introduced in recent commit 7054186.

Author: Vignesh C
Reviewed-by: Peter Smith
Discussion: https://postgr.es/m/CALDaNm3OcXdY0EzDEKAfaK9gq2B67Mfsgxu93+_249ohyts=0g@mail.gmail.com
This patch fixes two distinct errors that both ultimately trace
to commit 71d60e2, which added the ats_modifiedcols field.

The more severe error is that ats_modifiedcols wasn't accounted for
in afterTriggerAddEvent's scanning loop that looks for a pre-existing
duplicate AfterTriggerSharedData.  Thus, a new event could be
incorrectly matched to an AfterTriggerSharedData that has a different
value of ats_modifiedcols, resulting in the wrong tg_updatedcols
bitmap getting passed to the trigger whenever it finally gets fired.
We'd not noticed because (a) few triggers consult tg_updatedcols,
and (b) we had no tests exercising a case where such a trigger was
called as an AFTER trigger.  In the test case added by this commit,
contrib/lo's trigger fails to remove a large object when expected
because (without this fix) it thinks the LO OID column hasn't changed.

The other problem was introduced by commit ce5aaea, which copied the
modified-columns bitmap into trigger-related storage.  It made a copy
for every trigger event, whereas what we really want is to make a new
copy only when we make a new AfterTriggerSharedData entry.  (We could
imagine adding extra logic to reduce the number of bitmap copies still
more, but it doesn't look worthwhile at the moment.)  In a simple test
of an UPDATE of 10000000 rows with a single AFTER trigger, this thinko
roughly tripled the amount of memory consumed by the pending-triggers
data structures, from 160446744 to 480443440 bytes.

Fixing the first problem requires introducing a bms_equal() call into
afterTriggerAddEvent's scanning loop, which is slightly annoying from
a speed perspective.  However, getting rid of the excessive bms_copy()
calls from the second problem balances that out; overall speed of
trigger operations is the same or slightly better, in my tests.

Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 13
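The new test builds on contrib/lo's documented trigger usage, roughly
(table and column names as in the lo documentation):

    CREATE EXTENSION lo;
    CREATE TABLE image (title text, raster lo);
    CREATE TRIGGER t_raster BEFORE UPDATE OR DELETE ON image
        FOR EACH ROW EXECUTE FUNCTION lo_manage(raster);
    -- With the bug, an AFTER-trigger variant of this setup could see a
    -- stale tg_updatedcols bitmap and skip unlinking the old object.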
This can be useful either for jumbling expressions in other contexts
(e.g. to calculate a plan jumble), or to allow extensions to use
a modified jumbling logic more easily.

This intentionally supports the use case where a separate jumbling logic
does not care about recording constants, as the query jumble does.

This allows users of the cumulative statistics systems to drop all
entries for a given kind, similar to how pgstat_reset_entries_of_kind
allows resetting all entries for a given statistics kind.

Add an example use of pgstat_drop_entries_of_kind in the injection
points module by adding an injection_points_stats_reset function.

When enabled via the new compute_plan_id GUC (default off), this utilizes
the existing treewalk in setrefs.c after planning to calculate a hash
(the "plan_id", or plan identifier) that can be used to identify
which plan was chosen.

The plan_id generally intends to be the same if a given EXPLAIN (without
ANALYZE) output is the same. The plan_id includes both the top-level plan
as well as all subplans. Execution statistics are excluded.

If enabled, the plan_id is shown for currently running queries in
pg_stat_activity, as well as recorded in EXPLAIN and auto_explain output.

Other in-core users or extensions can use this facility to show or
accumulate statistics about the plans used by queries, to help identify
plan regressions, or drive plan management decisions.

Note that this commit intentionally does not include a facility to map
a given plan_id to the EXPLAIN text output - it is assumed that users
can utilize the auto_explain extension to establish this mapping as
needed, or extensions can record this via the existing planner hook.
This extension allows tracking per-plan call counts and execution time,
as well as capturing the plan text, aka EXPLAIN (COSTS OFF), for the
first execution of a given plan.  This utilizes the compute_plan_id
functionality for tracking different plans.
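Assuming the extension exposes its counters through a view (view and
column names are hypothetical), usage might look like:

    SELECT plan_id, calls, total_exec_time, plan
    FROM pg_stat_plans
    ORDER BY total_exec_time DESC
    LIMIT 10;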
@lfittl lfittl force-pushed the lfittl-planid-tracking branch from 5c3e3ce to 7e17421 Compare January 23, 2025 21:59