This repository has been archived by the owner on Mar 6, 2023. It is now read-only.

Merge current master #44

Open
wants to merge 1,088 commits into base: cmake_rel11

Conversation

@ildus commented Mar 1, 2018

No description provided.

adunstan and others added 30 commits January 16, 2018 19:07
This creates a single function, JsonEncodeDateTime, which will format these
datetime data types in an efficient and consistent manner. This will be all
the more important when we come to jsonpath, so we don't have to implement
yet more code doing the same thing in two more places.

This also extends the code to handle time and timetz types which were
not previously handled specially. This requires exposing the time2tm and
timetz2tm functions.

Patch from Nikita Glukhov
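
A minimal SQL sketch of the user-visible behavior touched here (the exact output
strings are illustrative, not verbatim):

```sql
-- json/jsonb conversion of datetime values is now centralized, and time/timetz
-- are handled specially as well; the output shapes below are only indicative.
SELECT to_json(timestamptz '2018-01-16 19:07:00+00');  -- e.g. "2018-01-16T19:07:00+00:00"
SELECT to_json(time '12:34:56');                        -- e.g. "12:34:56"
SELECT to_json(timetz '12:34:56+02');                   -- e.g. "12:34:56+02"
```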
Ability to advance both physical and logical replication slots using a
new user function pg_replication_slot_advance().

For a logical slot, advancing means records are consumed as fast as possible
and changes are not passed to the output plugin for sending. This makes the
second phase of replication slot creation (after SNAPBUILD_FULL_SNAPSHOT is
reached) faster, especially when there are big transactions, as the reorder
buffer does not have to deal with data changes and does not have to spill
to disk.

Author: Petr Jelinek
Reviewed-by: Simon Riggs
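
A hedged usage sketch of the new function (the slot name is made up; the
arguments are a slot name and a target LSN):

```sql
-- Advance a slot up to the current WAL position.  For a logical slot the
-- decoded changes are consumed without being handed to the output plugin.
SELECT slot_name, end_lsn
FROM pg_replication_slot_advance('my_slot', pg_current_wal_lsn());
```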
node->partitioned_rels is only set in UPDATE/DELETE cases, but
ExecInitModifyTable only uses its "rel" variable in INSERT cases,
so the extra logic to find the root rel is just a waste of complexity
and cycles.

Etsuro Fujita, reviewed by Amit Langote

Discussion: https://postgr.es/m/[email protected]
When pushing down a join to a foreign server, postgres_fdw constructs
an alternative plan to be used for any EvalPlanQual rechecks that
prove to be necessary.  This plan is stored as the outer subplan of
the Foreign Scan implementing the pushed-down join.  Previously, this
alternative plan could have a different nominal sort ordering than its
parent, which seemed OK since there will only be one tuple per base
table anyway in the case of an EvalPlanQual recheck.  Actually,
though, it caused a problem if that path was used as a building block
for the EvalPlanQual recheck plan of a higher-level foreign join,
because we could end up with a merge join one of whose inputs was not
labelled with the correct sort order.  Repair by injecting an extra
Sort node into the EvalPlanQual recheck plan whenever it would
otherwise fail to be sorted at least as well as its parent Foreign
Scan.

Report by Jeff Janes.  Patch by me, reviewed by Tom Lane, who also
provided the test case and comment text.

Discussion: http://postgr.es/m/CAMkU=1y2G8VOVBHv3iXU2TMAj7-RyBFFW1uhkr5sm9LQ2=X35g@mail.gmail.com
Reorder header files in joinrels.c and pathnode.c in alphabetical order,
removing unnecessary ones.

Author: Etsuro Fujita
On Sparc64, use of __attribute__((aligned(8))) with __int128 causes faulty
code generation in gcc versions at least through 5.5.0.  We can work around
that by disabling use of __int128, so teach configure to test for the bug.

This solution doesn't fix things for the case of cross-compiling with a
buggy compiler; to support that nicely, we'd need to add a manual disable
switch.  Unless more such cases turn up, it doesn't seem worth the work.
Affected users could always edit pg_config.h manually.

In passing, fix some typos in the existing configure test for __int128.
They're harmless because we only compile that code, not run it, but
they're still confusing for anyone looking at it closely.

This is needed in support of commit 7518049, so back-patch to 9.5
as that was.

Marina Polyakova, Victor Wagner, Tom Lane

Discussion: https://postgr.es/m/[email protected]
This was hardly tested at all.  The trigger case was lightly tested by
the logical replication tests, but rules and event triggers were not
tested at all.
The "callback" that this comment was referring to was removed by commit
c0a15e0, so update to match the current
code.
Typo in 9c7d06d

Reported-by: Masahiko Sawada
This will allow the pending patch for parallel CREATE INDEX to work
on system catalogs, and to provide the same level of protection
against use of user indexes while they are being rebuilt that we
have for non-parallel CREATE INDEX.

Patch by me, reviewed by Peter Geoghegan.

Discussion: http://postgr.es/m/CA+TgmoYN-YQU9JsGQcqFLovZ-C+Xgp1_xhJQad=cunGG-_p5gg@mail.gmail.com
Discussion: http://postgr.es/m/CAH2-Wzkv4UNkXYhqQRqk-u9rS7h5c-4cCW+EqQ8K_WSeS43aZg@mail.gmail.com
For no apparent reason, this function was using a 16-bit-wide inhseqno
value, rather than the correct 32-bit width, which is what is stored in
the pg_inherits catalog.  This becomes evident if you try to create a
table with more than 65535 parents, because this error appears:

ERROR:  duplicate key value violates unique constraint «pg_inherits_relid_seqno_index»
DETAIL:  Key (inhrelid, inhseqno)=(329371, 0) already exists.

Needless to say, having so many parents is an uncommon situation, which
explains why this error has never been reported despite having been
introduced with the Postgres95 1.01 sources in commit d31084e:
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/commands/creatinh.c;hb=d31084e9d111#l349

Backpatch all the way back.

David Rowley noticed this while reviewing a patch of mine.
Discussion: https://postgr.es/m/CAKJS1f8Dn7swSEhOWwzZzssW7747YB=2Hi+T7uGud40dur69-g@mail.gmail.com
When CREATE INDEX is run on a partitioned table, create catalog entries
for an index on the partitioned table (which is just a placeholder since
the table proper has no data of its own), and recurse to create actual
indexes on the existing partitions; create them in future partitions
also.

As a convenience gadget, if the new index definition matches some
existing index in partitions, these are picked up and used instead of
creating new ones.  Whichever way these indexes come about, they become
attached to the index on the parent table and are dropped alongside it,
and cannot be dropped in isolation unless they are detached first.

To support pg_dump'ing these indexes, add commands
    CREATE INDEX ON ONLY <table>
(which creates the index on the parent partitioned table, without
recursing) and
    ALTER INDEX ATTACH PARTITION
(which is used after the indexes have been created individually on each
partition, to attach them to the parent index).  These reconstruct prior
database state exactly.

Reviewed-by: (in alphabetical order) Peter Eisentraut, Robert Haas, Amit
	Langote, Jesper Pedersen, Simon Riggs, David Rowley
Discussion: https://postgr.es/m/[email protected]
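
A sketch of how the two new commands fit together, roughly the shape pg_dump now
emits (all table and index names here are invented):

```sql
CREATE TABLE measurement (logdate date, peaktemp int) PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2018 PARTITION OF measurement
    FOR VALUES FROM ('2018-01-01') TO ('2019-01-01');

-- A plain "CREATE INDEX ON measurement (logdate)" would recurse to all existing
-- partitions.  pg_dump instead reconstructs the same state piecewise:
CREATE INDEX measurement_logdate_idx ON ONLY measurement (logdate);          -- parent only, no recursion
CREATE INDEX measurement_y2018_logdate_idx ON measurement_y2018 (logdate);   -- each partition individually
ALTER INDEX measurement_logdate_idx ATTACH PARTITION measurement_y2018_logdate_idx;
```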
I missed a '0' in the version number string ...

Per buildfarm member crake.
There used to be a lot of different *Type and *Kind symbol groups to
address objects within different commands, most of which have been
replaced by ObjectType, starting with
b256f24.  But this conversion was never
done for the ACL commands until now.

This change ends up being just a plain replacement of the types and
symbols, without any code restructuring needed, except deleting some now
redundant code.

Reviewed-by: Michael Paquier <[email protected]>
Reviewed-by: Stephen Frost <[email protected]>
AclObjectKind was basically just another enumeration for object types,
and we already have a preferred one for that.  It's only used in
aclcheck_error.  By using ObjectType instead, we can also give some more
precise error messages, for example "index" instead of "relation".

Reviewed-by: Michael Paquier <[email protected]>
When an index column is an expression, it makes no sense to compare its
attribute numbers.

This seems to account for remaining buildfarm fallout from 8b08f7d.
At least, it solves the issue in my local 32-bit VM -- let's see what the
rest thinks.
When an UPDATE causes a row to no longer match the partition
constraint, try to move it to a different partition where it does
match the partition constraint.  In essence, the UPDATE is split into
a DELETE from the old partition and an INSERT into the new one.  This
can lead to surprising behavior in concurrency scenarios because
EvalPlanQual rechecks won't work as they normally did; the known
problems are documented.  (There is a pending patch to improve the
situation further, but it needs more review.)

Amit Khandekar, reviewed and tested by Amit Langote, David Rowley,
Rajkumar Raghuwanshi, Dilip Kumar, Amul Sul, Thomas Munro, Álvaro
Herrera, Amit Kapila, and me.  A few final revisions by me.

Discussion: http://postgr.es/m/CAJ3gD9do9o2ccQ7j7+tSgiE1REY65XRiMb=yJO3u3QhyP8EEPQ@mail.gmail.com
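
A small sketch of the new behavior (hypothetical table names):

```sql
CREATE TABLE orders (region text, id int) PARTITION BY LIST (region);
CREATE TABLE orders_eu PARTITION OF orders FOR VALUES IN ('eu');
CREATE TABLE orders_us PARTITION OF orders FOR VALUES IN ('us');

INSERT INTO orders VALUES ('eu', 1);
-- Previously this failed with a partition-constraint violation; now the row is
-- moved, internally as a DELETE from orders_eu plus an INSERT into orders_us.
UPDATE orders SET region = 'us' WHERE id = 1;
```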
We don't actually need two code paths, one for 32 bits and one for 64
bits.  Since the existing code already assumed that "long long" is
available, we can just use PyLong_FromLongLong() for 64 bits as well.
In Python 2.5 and later, PyLong_FromLong() and PyLong_FromLongLong() use
the same code, so there will be no difference for 64-bit platforms.  In
Python 2.4, the code is different, but performance testing showed no
noticeable difference in PL/Python, and that Python version is ancient
anyway.

Discussion: https://www.postgresql.org/message-id/[email protected]
Apparently, Peter's compiler has faith that the switch test values here
could never not be valid values of their enums.  Mine does not, and
I tend to agree with it.
The previous code converted SPI_processed to a Python float if it didn't
fit into a Python int.  But Python longs have unlimited precision, so
use that instead in all cases.

As in eee50a8, we use the Python
LongLong API unconditionally for simplicity.

Reviewed-by: Tom Lane <[email protected]>
Document how to properly create root and intermediate certificates using
v3_ca extensions and where to place intermediate certificates so they
are properly transferred to the remote side with the leaf certificate to
link to the remote root certificate.  This corrects docs that used to
say that intermediate certificates must be stored with the root
certificate.

Also add instructions on how to create root, intermediate, and leaf
certificates.

Discussion: https://postgr.es/m/[email protected]

Reviewed-by: Michael Paquier

Backpatch-through: 9.3
The field is still called "hostaddr", so make sure references use
"hostaddr values" instead.

Author: Michael Paquier <[email protected]>
Add support for huge pages (called large pages on Windows) to the
Windows build.

This (probably) breaks compatibility with Windows versions prior to
Windows 2003 or Windows Vista.

Authors: Takayuki Tsunakawa and Thomas Munro
Reviewed by: Magnus Hagander, Amit Kapila
Spotted by Thomas Munro
In each of the supplied procedural languages (PL/pgSQL, PL/Perl,
PL/Python, PL/Tcl), add language-specific commit and rollback
functions/commands to control transactions in procedures in that
language.  Add similar underlying functions to SPI.  Some additional
cleanup so that transaction commit or abort doesn't blow away data
structures still used by the procedure call.  Add execution context
tracking to CALL and DO statements so that transaction control commands
can only be issued in top-level procedure and block calls, not function
calls or other procedure or block calls.

- SPI

Add a new function SPI_connect_ext() that is like SPI_connect() but
allows passing option flags.  The only option flag right now is
SPI_OPT_NONATOMIC.  A nonatomic SPI connection can execute transaction
control commands, otherwise it's not allowed.  This is meant to be
passed down from CALL and DO statements which themselves know in which
context they are called.  A nonatomic SPI connection uses different
memory management.  A normal SPI connection allocates its memory in
TopTransactionContext.  For nonatomic connections we use PortalContext
instead.  As the comment in SPI_connect_ext() (previously SPI_connect())
indicates, one could potentially use PortalContext in all cases, but it
seems safest to leave the existing uses alone, because this stuff is
complicated enough already.

SPI also gets new functions SPI_start_transaction(), SPI_commit(), and
SPI_rollback(), which can be used by PLs to implement their transaction
control logic.

- portalmem.c

Some adjustments were made in the code that cleans up portals at
transaction abort.  The portal code could already handle a command
*committing* a transaction and continuing (e.g., VACUUM), but it was not
quite prepared for a command *aborting* a transaction and continuing.

In AtAbort_Portals(), remove the code that marks an active portal as
failed.  As the comment there already predicted, this doesn't work if
the running command wants to keep running after transaction abort.  And
it's actually not necessary, because pquery.c is careful to run all
portal code in a PG_TRY block and explicitly runs MarkPortalFailed() if
there is an exception.  So the code in AtAbort_Portals() is never used
anyway.

In AtAbort_Portals() and AtCleanup_Portals(), we need to be careful not
to clean up active portals too much.  This mirrors similar code in
PreCommit_Portals().

- PL/Perl

Gets new functions spi_commit() and spi_rollback()

- PL/pgSQL

Gets new commands COMMIT and ROLLBACK.

Update the PL/SQL porting example in the documentation to reflect that
transactions are now possible in procedures.

- PL/Python

Gets new functions plpy.commit and plpy.rollback.

- PL/Tcl

Gets new commands commit and rollback.

Reviewed-by: Andrew Dunstan <[email protected]>
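
A minimal PL/pgSQL sketch of the new capability, using an invented table and
procedure name; the COMMIT is only legal because CALL runs the procedure in a
nonatomic context:

```sql
CREATE TABLE batch_log (n int);

CREATE PROCEDURE load_in_batches()
LANGUAGE plpgsql
AS $$
BEGIN
    FOR i IN 1..10 LOOP
        INSERT INTO batch_log VALUES (i);
        COMMIT;   -- ends the current transaction and starts a new one inside the procedure
    END LOOP;
END;
$$;

CALL load_in_batches();
```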
…ble.

_tocEntryRequired() expects that it can identify ACL, SECURITY LABEL,
and COMMENT TOC entries that are for large objects by seeing whether
the tag for them starts with "LARGE OBJECT ".  While that works fine
for actual large objects, which are indeed tagged that way, it's
subject to false positives unless every such entry's tag starts with an
appropriate type ID.  And in fact it does not work for ACLs, because
up to now we customarily tagged those entries with just the bare name
of the object.  This means that an ACL for an object named
"LARGE OBJECT something" would be misclassified as data not schema,
with undesirable results in a schema-only or data-only dump ---
although pg_upgrade seems unaffected, due to the special case for
binary-upgrade mode further down in _tocEntryRequired().

We can fix this by changing all the dumpACL calls to use the label
strings already in use for comments and security labels, which do
follow the convention of starting with an object type indicator.

Well, mostly they follow it.  dumpDatabase() got it wrong, using
just the bare database name for those purposes, so that a database
named "LARGE OBJECT something" would similarly be subject to having
its comment or security label dropped or included when not wanted.
Bring that into line too.  (Note that up to now, database ACLs have
not been processed by pg_dump, so that this issue doesn't affect them.)

_tocEntryRequired() itself is not free of fault: it was overly liberal
about matching object tags to "LARGE OBJECT " in binary-upgrade mode.
This looks like it is probably harmless because there would be no data
component to strip anyway in that mode, but at best it's trouble
waiting to happen, so tighten that up too.

The possible misclassification of SECURITY LABEL entries for databases is
in principle a security problem, but the opportunities for actual exploits
seem too narrow to be interesting.  The other cases seem like just bugs,
since an object owner can change its ACL or comment for himself; he needn't
try to trick someone else into doing it by choosing a strange name.

This has been broken since per-large-object TOC entries were introduced
in 9.0, so back-patch to all supported branches.

Discussion: https://postgr.es/m/[email protected]
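
For illustration, this is the sort of innocently named object that used to be
misclassified; the names are of course contrived:

```sql
-- An ordinary table whose name merely starts with "LARGE OBJECT".  Its ACL
-- entry used to be tagged with the bare object name, so _tocEntryRequired()
-- could mistake it for a large-object entry.
CREATE TABLE "LARGE OBJECT something" (id int);
GRANT SELECT ON "LARGE OBJECT something" TO PUBLIC;
COMMENT ON TABLE "LARGE OBJECT something" IS 'an unfortunate but legal name';
```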
tglsfdc and others added 12 commits February 27, 2018 13:28
A before-update row trigger may choose to return the "new" or "old" tuple
unmodified.  ExecBRUpdateTriggers failed to consider the second
possibility, and would proceed to free the "old" tuple even if it was the
one returned, leading to subsequent access to already-deallocated memory.
In debug builds this reliably leads to an "invalid memory alloc request
size" failure; in production builds it might accidentally work, but data
corruption is also possible.

This is a very old bug.  There are probably a couple of reasons it hasn't
been noticed up to now.  It would be more usual to return NULL if one
wanted to suppress the update action; returning "old" is significantly less
efficient since the update will occur anyway.  Also, none of the standard
PLs would ever cause this because they all returned freshly-manufactured
tuples even if they were just copying "old".  But commit 4b93f57 changed
that for plpgsql, making it possible to see the bug with a plpgsql trigger.
Still, this is certainly legal behavior for a trigger function, so it's
ExecBRUpdateTriggers's fault not plpgsql's.

It seems worth creating a test case that exercises returning "old" directly
with a C-language trigger; testing this through plpgsql seems unreliable
because its behavior might change again.

Report and fix by Rushabh Lathia; regression test case by me.
Back-patch to all supported branches.

Discussion: https://postgr.es/m/CAGPqQf1P4pjiNPrMof=P_16E-DFjt457j+nH2ex3=nBTew7tXw@mail.gmail.com
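
A hedged sketch of the triggering pattern in PL/pgSQL form (table and function
names invented; the committed regression test uses a C-language trigger instead,
as noted above):

```sql
CREATE TABLE some_table (v int);

-- Returning the unmodified OLD row from a BEFORE UPDATE trigger is legal:
-- the update still proceeds (writing the old values), which is what exposed
-- the premature free described above.
CREATE FUNCTION keep_old() RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
    RETURN OLD;
END;
$$;

CREATE TRIGGER some_table_keep_old BEFORE UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE keep_old();
```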
This seemed like a good idea in commit be42eb9, but it causes more
trouble than it's worth for cross-branch upgrade testing.

Discussion: https://postgr.es/m/[email protected]
Tom Kazimiers reported that transition tables don't work correctly when
they are scanned by more than one executor node.  That's because commit
18ce3a4 allocated separate read pointers for each executor node, as it
must, but failed to make them active at the appropriate times.  Repair.

Thomas Munro

Discussion: https://postgr.es/m/20180224034748.bixarv6632vbxgeb%40dewberry.localdomain
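
One plausible shape of the failing case, sketched with invented names: a
statement-level trigger whose body scans the same transition table from two
executor nodes by joining it to itself:

```sql
CREATE TABLE audited (id int, val text);

CREATE FUNCTION check_new_rows() RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
    n bigint;
BEGIN
    -- The self-join scans the transition-table tuplestore from two nodes,
    -- which requires separate, properly activated read pointers.
    SELECT count(*) INTO n
    FROM new_rows a JOIN new_rows b ON a.id = b.id;
    RAISE NOTICE 'saw % joined rows', n;
    RETURN NULL;
END;
$$;

CREATE TRIGGER audited_check AFTER INSERT ON audited
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT EXECUTE PROCEDURE check_new_rows();
```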
…lds.

Although configure-based builds correctly define HAVE_LONG_LONG_INT when
appropriate (in both pg_config.h and ecpg_config.h), builds using the MSVC
scripts failed to do so.  This currently has no impact on the backend,
since it uses that symbol nowhere; but it does prevent ecpg from
supporting "long long int".  Fix that.

Also, adjust Solution.pm so that in the constructed ecpg_config.h file,
the "#if (_MSC_VER > 1200)" covers only the LONG_LONG_INT-related
#defines, not the whole file.  AFAICS this was a thinko on somebody's
part: ENABLE_THREAD_SAFETY should always be defined in Windows builds,
and in branches using USE_INTEGER_DATETIMES, the setting of that shouldn't
depend on the compiler version either.  If I'm wrong, I imagine the
buildfarm will say so.

Per bug #15080 from Jonathan Allen; issue diagnosed by Michael Meskes
and Andrew Gierth.  Back-patch to all supported branches.

Discussion: https://postgr.es/m/[email protected]
The changes in the CREATE POLICY man page from commit
87c2a17 triggered a stylesheet bug that
created some warning messages and incorrect output.  This installs a
workaround.

Also improve the whitespace a bit so it looks better.
Turn off the man.endnotes.are.numbered parameter, which we don't need and
which vastly improves performance when off.  Also turn on
man.output.quietly, which makes things a bit faster as well; its progress
output is also less useful now that the build is so fast.
Commits 6f6b99d and
f3b0897 didn't properly update
these comments.

Etsuro Fujita, reviewed by Amit Langote

Discussion: http://postgr.es/m/[email protected]
Commit 1bc0100 added this test,
and commit 882ea50 tried to
stabilize it.  There were still failures, so commit
958e20e tried again to stabilize
it.  That approach is still failing on jaguarundi, though, so
back it out and try something else.  Specifically, instead of
disabling remote estimates for the table in question, let's tell
autovacuum to leave it alone.

Etsuro Fujita

Discussion: http://postgr.es/m/[email protected]
Parallel-aware plan nodes must be prepared to run without parallelism
if it's not possible at execution time for whatever reason.  Commit
ab72716, which introduced Parallel
Append, overlooked this.

Rajkumar Raghuwanshi reported this problem, and I included his test
case in this patch.  The code changes are by me.

Discussion: http://postgr.es/m/CAKcux6=WqkUudLg1GLZZ7fc5ScWC1+Y9qD=pAHeqy32WoeJQvw@mail.gmail.com
@stalkerg (Owner) commented Mar 2, 2018

Looks like the fixes aren't enough yet; check Travis. For example, pq_verify_peer_name_matches_certificate.

@ildus (Author) commented Mar 2, 2018

@stalkerg yeah, but I can only make fixes for Linux; could you look at the Windows side?

@stalkerg (Owner) commented Mar 4, 2018

could you look at the Windows side?

Sure, but there are currently Linux bugs as well.

@stalkerg (Owner) commented Mar 6, 2018

Looks much better, but there are still problems with the ecpg tests; I think some files are missing.

@stalkerg changed the base branch from cmake to cmake_rel11 August 5, 2018 23:46
@stalkerg (Owner) commented Aug 5, 2018

I changed the merge target to the special cmake_rel11 branch so it can be worked on independently.

@SamuelMarks

Any news on this merge?

SamuelMarks added a commit to SamuelMarks/postgres_cmake that referenced this pull request Nov 11, 2018
…olve `HAVE_INT128` macro redefinition warning
@stalkerg (Owner)

Not really, but I want to start working on it very, very soon.
At the very least, we should find the best strategy for migrating to 11.

@SamuelMarks

Alright, well in the meantime I'm hacking something together here:
#46
