This adds extern declarations for some global variables produced by
Bison that are not already declared in its generated header file.
This is a workaround to be able to add -Wmissing-variable-declarations
to the global set of warning options in the near future.
Another longer-term solution would be to convert these grammars to
"pure" parsers in Bison, to avoid global variables altogether. Note
that the core grammar is already pure, so this patch did not need to
touch it.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
"path" provides a reliable method of determining the parent/child
relationships between memory contexts. Previously this could be done in
a non-reliable way by writing a recursive query and joining the "parent"
and "name" columns. This wasn't reliable as the names were not unique,
which could result in joining to the wrong parent.
To make this reliable, "path" stores an array of numerical identifiers
starting with the identifier for TopLevelMemoryContext. It contains an
element for each intermediate parent between that and the current
context, and ends with the identifier for the current context itself.
Incompatibility: Here we also adjust the "level" column to make it
1-based rather than 0-based. A 1-based level provides a convenient way
to access elements in the "path" array. e.g. path[level] gives the
identifier for the current context.
Identifiers are not stable across multiple evaluations of the view. In
an attempt to make these more stable for ad-hoc queries, the identifiers
are assigned breadth-first. Contexts closer to TopLevelMemoryContext
are less likely to change between queries and during queries.
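For example, the parent of each context can now be determined by
matching identifiers rather than names. A sketch (assuming the
pg_backend_memory_contexts view and the columns described above):
SELECT c.name, c.level, p.name AS parent
FROM pg_backend_memory_contexts AS c
LEFT JOIN pg_backend_memory_contexts AS p
  ON p.path[p.level] = c.path[c.level - 1];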
Author: Melih Mutlu <m.melihmutlu@gmail.com>
Discussion: https://postgr.es/m/CAGPVpCThLyOsj3e_gYEvLoHkr5w=tadDiN_=z2OwsK3VJppeBA@mail.gmail.com
Reviewed-by: Andres Freund, Stephen Frost, Atsushi Torikoshi
Reviewed-by: Michael Paquier, Robert Haas, David Rowley
Commit d01ce180 invented a new way to find the latest MacPorts version.
By bad luck, a new beta release has just been published, and it seems
to lack some packages we need. Go back to searching for a specific
version for now. We still search with a pattern so that we can find the
package for the running version of macOS, but for now we always look for
2.9.3. The code to do that had already been anticipated in a
commented-out line; I just didn't expect to have to use it so soon...
Also include the whole MacPorts installation script in the cache key, so
that changes to the script cause a fresh installation. This should make
it a bit easier to reason about the effect of changes on cached state in
github accounts using CI, when we make adjustments.
Back-patch to 15, like d01ce180.
Discussion: https://postgr.es/m/CA%2BhUKGLqJdv6RcwyZ_0H7khxtLTNJyuK%2BvDFzv3uwYbn8hKH6A%40mail.gmail.com
1. Previously we were using ghcr.io/cirruslabs/macos-XXX-base:latest
images, but Cirrus has started ignoring that and using a particular
image, currently ghcr.io/cirruslabs/macos-runner:sonoma, for github
accounts using free CI resources (as opposed to dedicated runner
machines, as cfbot uses). Let's just ask for that image anyway, to stay
in sync.
2. Instead of hard-coding a MacPorts installation URL, deduce it from
the running macOS version and the available releases. This removes the
need to keep ci_macports_packages.sh in sync with .cirrus.tasks.yml,
and to advance the MacPorts version from time to time.
3. Change the cache key we use to cache the whole macports installation
across builds to include the OS major version, to trigger a fresh
installation when appropriate.
Back-patch to 15 where CI began.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/CA%2BhUKGLqJdv6RcwyZ_0H7khxtLTNJyuK%2BvDFzv3uwYbn8hKH6A%40mail.gmail.com
Presently, pg_upgrade obtains the number of subscriptions in the
to-be-upgraded cluster by first querying pg_subscription in every
database for the number of subscriptions in only that database.
Then, in count_old_cluster_subscriptions(), it adds all the values
collected in the first step. This is expensive, especially when
there are many databases.
Fortunately, there is a better way to retrieve the subscription
count. Since pg_subscription is a shared catalog, we only need to
connect to a single database and query it once. This commit
modifies pg_upgrade to use that approach, which also allows us to
trim several lines of code. In passing, move the call to
get_db_subscription_count(), which has been renamed to
get_subscription_count(), from get_db_rel_and_slot_infos() to the
dedicated >= v17 section in check_and_dump_old_cluster().
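Because pg_subscription is shared, a single query along these lines
(an illustrative sketch, not necessarily the exact query used) returns
the cluster-wide total from any one database:
SELECT count(*) FROM pg_subscription;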
We may be able to make similar improvements to
get_old_cluster_logical_slot_infos(), but that is left as a future
exercise.
Reviewed-by: Michael Paquier, Amit Kapila
Discussion: https://postgr.es/m/ZprQJv_TxccN3tkr%40nathan
Backpatch-through: 17
This commit adds a regression test to verify that pg_stat_statements
correctly handles privileges, improving its test coverage.
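For instance (an illustrative sketch with a hypothetical role name,
not the exact test), a role without pg_read_all_stats should see other
users' query text masked:
SET ROLE regular_user;
SELECT query FROM pg_stat_statements;
-- rows owned by other roles show <insufficient privilege>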
Author: Keisuke Kuroda
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/2224ccf2e12c41ccb81702ef3303d5ac@nttcom.co.jp
We don't allow inheritance parents as partitions, and have checks to
prevent this; but if a table _was_ an inheritance parent in the past and
all of its children have since been removed, the pg_class.relhassubclass
flag may remain set, which confuses the partition pruning code (most
obviously, it results in an assertion failure; in production builds it
may be worse).
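A sketch of the problematic sequence, with hypothetical names:
CREATE TABLE old_parent (a int);
CREATE TABLE old_child () INHERITS (old_parent);
DROP TABLE old_child;  -- relhassubclass may remain set on old_parent
CREATE TABLE part (a int) PARTITION BY LIST (a);
ALTER TABLE part ATTACH PARTITION old_parent FOR VALUES IN (1);
SELECT * FROM part WHERE a = 1;  -- pruning could then hit the assertion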
Fix by resetting relhassubclass on attach.
Backpatch to all supported versions.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18550-d5e047e9a897a889@postgresql.org
Previously, TidStoreIterateNext() would expand the set of offsets for
each block into an internal buffer that it overwrote each time. In
order to be able to collect the offsets for multiple blocks before
working with them, change the contract. Now, the offsets are obtained
by a separate call to TidStoreGetBlockOffsets(), which can be called at
a later time. TidStoreIteratorResult objects are safe to copy and store
in a queue.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/CAAKRu_bbkmwAzSBgnezancgJeXrQZXy4G4kBTd+5=cr86H5yew@mail.gmail.com
The two_phase option is controlled by both the publisher (as a slot
option) and the subscriber (as a subscription option), so the slot option
must also be modified.
Changing the 'two_phase' option for a subscription from 'true' to 'false'
is permitted only when there are no pending prepared transactions
corresponding to that subscription. Otherwise, the changes of already
prepared transactions can be replicated again along with their
corresponding commit, leading to duplicate data or errors.
To avoid data loss, the 'two_phase' option for a subscription can only be
changed from 'false' to 'true' once the initial data synchronization is
completed. Therefore, this change is applied later by the logical
replication worker.
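For example, with a hypothetical subscription "mysub" (the subscription
must be disabled while the option is changed):
ALTER SUBSCRIPTION mysub DISABLE;
ALTER SUBSCRIPTION mysub SET (two_phase = false);
ALTER SUBSCRIPTION mysub ENABLE;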
Author: Hayato Kuroda, Ajin Cherian, Amit Kapila
Reviewed-by: Peter Smith, Hou Zhijie, Amit Kapila, Vitaly Davydov, Vignesh C
Discussion: https://postgr.es/m/8fab8-65d74c80-1-2f28e880@39088166
Add extern declarations in appropriate header files for global
variables related to GUC. In many cases, this was handled quite
inconsistently before, with some GUC variables declared in a header
file and some only pulled in via ad-hoc extern declarations in various
.c files.
Also add PGDLLIMPORT qualifications to those variables. These were
previously missing because src/tools/mark_pgdllimport.pl has only been
used with header files.
This also fixes -Wmissing-variable-declarations warnings for GUC
variables (not yet part of the standard warning options).
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
When provided an empty initial array, array_set_slice() fails to
check for overflow when computing the new array's dimensions.
While such overflows are ordinarily caught by ArrayGetNItems(),
commands with the following form are accepted:
INSERT INTO t (i[-2147483648:2147483647]) VALUES ('{}');
To fix, perform the hazardous computations using overflow-detecting
arithmetic routines. As with commit 18b585155a, the added test
cases generate errors that include a platform-dependent value, so
we again use psql's VERBOSITY parameter to suppress printing the
message text.
Reported-by: Alexander Lakhin
Author: Joseph Koshakow
Reviewed-by: Jian He
Discussion: https://postgr.es/m/31ad2cd1-db94-bdb3-f91a-65ffdb4bef95%40gmail.com
Backpatch-through: 12
This fixes warnings from -Wmissing-variable-declarations (not yet part
of the standard warning options) under EXEC_BACKEND. The
NON_EXEC_STATIC variables need a suitable declaration in a header file
under EXEC_BACKEND.
Also fix the inconsistent application of the volatile qualifier for
PMSignalState, which was revealed by this change.
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://www.postgresql.org/message-id/flat/e0a62134-83da-4ba4-8cdb-ceb0111c95ce@eisentraut.org
clog.c, async.c and predicate.c included some SLRU page numbers still
handled as 4-byte integers, while int64 should be used for this purpose.
These oversights were introduced by 4ed8f0913bfd, which introduced the
use of 8-byte integers for SLRU page numbers but forgot about the code
paths updated by this commit.
Reported-by: Noah Misch
Author: Aleksander Alekseev, Michael Paquier
Discussion: https://postgr.es/m/20240626002747.dc.nmisch@google.com
Backpatch-through: 17
The docs currently imply that ldapurl is for search+bind only, but
that's not true. Rearrange the docs to cover this better.
Add a test for ldapurl with simple bind. This was previously allowed
but unexercised, and now that it's documented it'd be good to pin the
behavior.
Improve the error reported when mixing LDAP bind modes. The option names
had gone stale; replace them with a more general statement.
Author: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAOYmi+nyg9gE0LeP=xQ3AgyQGR=5ZZMkVVbWd0uR8XQmg_dd5Q@mail.gmail.com
slru.h incorrectly described how SLRU segment names are formatted
depending on the segment number and on whether long or short segment
names are used. This commit fixes the description to match the actual
behavior.
Reported-by: Noah Misch
Author: Aleksander Alekseev
Discussion: https://postgr.es/m/20240626002747.dc.nmisch@google.com
Backpatch-through: 17
In create_gather_merge_path, we should always guarantee that the
subpath is adequately ordered, and we do not add a Sort node in
createplan.c for a Gather Merge node. Therefore, the 'else' branch in
create_gather_merge_path, which computes the cost for a Sort node, is
redundant.
This patch removes the redundant code and emits an error if the
subpath is not sufficiently ordered. Meanwhile, this patch changes
the check for the subpath's pathkeys in create_gather_merge_plan to an
Assert.
Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs48u=0bWf3epVtULjJ-=M9Hbkz+ieZQAOS=BfbXZFqbDCg@mail.gmail.com
In the case of a parallel plan, when computing the number of tuples
processed per worker, we divide the total number of tuples by the
parallel_divisor obtained from get_parallel_divisor(), which accounts
for the leader's contribution in addition to the number of workers.
Accordingly, when estimating the number of tuples for gather (merge)
nodes, we should multiply the number of tuples per worker by the same
parallel_divisor to reverse the division. However, currently we use
parallel_workers rather than parallel_divisor for the multiplication.
This could result in an underestimation of the number of tuples for
gather (merge) nodes, especially when there are fewer than four
workers.
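As a worked example, with 2 workers and leader participation,
get_parallel_divisor() returns 2 + (1.0 - 0.3 * 2) = 2.4, so
multiplying the per-worker estimate by 2 instead of 2.4 underestimates
the total by a factor of 1.2; with 4 or more workers the leader
contribution drops to zero and the two multipliers coincide.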
This patch fixes this issue by using the same parallel_divisor for the
multiplication. There is one ensuing plan change in the regression
tests, but it looks reasonable and does not compromise its original
purpose of testing parallel-aware hash join.
In passing, this patch removes an unnecessary assignment for path.rows
in create_gather_merge_path, and fixes an uninitialized-variable issue
in generate_useful_gather_paths.
No backpatch as this could result in plan changes.
Author: Anthonin Bonnefoy
Reviewed-by: Rafia Sabih, Richard Guo
Discussion: https://postgr.es/m/CAO6_Xqr9+51NxgO=XospEkUeAg-p=EjAWmtpdcZwjRgGKJ53iA@mail.gmail.com
We were not being clear about which variants of the "direction"
clause are permitted in MOVE. Also, the text seemed to be
written with only the FETCH/MOVE NEXT case in mind, so it
didn't apply very well to other variants.
Also, document that "MOVE count IN cursor" only works if count
is a constant. This is not the whole truth, because some other
cases such as a parenthesized expression will also work, but
we want to push people to use "MOVE FORWARD count" instead.
The constant case is enough to cover what we allow in plain SQL,
and that seems sufficient to claim support for.
Update a comment in pl_gram.y claiming that we don't document
that point.
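For example, in PL/pgSQL (a sketch, with "curs" an open cursor and "n"
an integer variable):
MOVE FORWARD n FROM curs;  -- the count here may be an expression
MOVE 5 IN curs;            -- the bare count form requires a constant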
Per gripe from Philipp Salvisberg.
Discussion: https://postgr.es/m/172155553388.702.7932496598218792085@wrigleys.postgresql.org
This reverts commit aa607980aee08416211f003ab41aa750f5559712.
This test proved to be unstable on the buildfarm, timing out before the
standby could catch up on 32-bit machines, where more rows were required,
and failing to reliably trigger multiple index vacuum rounds on 64-bit
machines, where fewer rows should be required.
Because the instability is only known to be present on versions of
Postgres with TIDStore used for dead TID storage by vacuum, this is only
being reverted on master and REL_17_STABLE.
As having this coverage may be valuable, there is a discussion on the
thread of possible ways to stabilize the test. If that happens, a fixed
test can be committed again.
Backpatch-through: 17
Reported-by: Tom Lane
Discussion: https://postgr.es/m/614152.1721580711%40sss.pgh.pa.us
Previously, the code charged disable_cost for CurrentOfExpr, and then
subtracted disable_cost from the cost of a TID path that used
CurrentOfExpr as the TID qual, effectively disabling all paths except
that one. Now, we instead suppress generation of the disabled paths
entirely, and generate only the one that the executor will actually
understand.
With this approach, we do not need to rely on disable_cost being
large enough to prevent the wrong path from being chosen, and we
save some CPU cycles by avoiding generating paths that we can't
actually use. In my opinion, the code is also easier to understand
like this.
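The case in question is a cursor-driven update, e.g. (hypothetical
names):
BEGIN;
DECLARE c CURSOR FOR SELECT * FROM tab FOR UPDATE;
FETCH 1 FROM c;
UPDATE tab SET x = x + 1 WHERE CURRENT OF c;
Only a TID-based scan of "tab" makes sense for the CURRENT OF qual, and
that is now the only path generated for it.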
Patch by me. Review by Heikki Linnakangas.
Discussion: http://postgr.es/m/591b3596-2ea0-4b8e-99c6-fad0ef2801f5@iki.fi
After calling ConditionVariableSleep() or ConditionVariableTimedSleep()
one or more times, code is supposed to call ConditionVariableCancelSleep()
to remove itself from the waitlist. This code neglected to do so.
As far as I know, that had no observable consequences, but let's make
the code correct.
Discussion: http://postgr.es/m/CA+TgmoYW8eR+KN6zhVH0sin7QH6AvENqw_bkN-bB4yLYKAnsew@mail.gmail.com
strtok() considers adjacent delimiters to be one delimiter, which is
arguably the wrong behavior in some cases. Replace with strsep(),
which has the right behavior: Adjacent delimiters create an empty
token.
Affected by this is the parsing of:
- Stored SCRAM secrets
("SCRAM-SHA-256$<iterations>:<salt>$<storedkey>:<serverkey>")
- ICU collation attributes
("und@colStrength=primary;colCaseLevel=yes") for ICU older than
version 54
- PG_COLORS environment variable
("error=01;31:warning=01;35:note=01;36:locus=01")
- pg_regress command-line options with comma-separated list arguments
(--dbname, --create-role) (currently only used by pg_regress_ecpg)
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Reviewed-by: David Steele <david@pgmasters.net>
Discussion: https://www.postgresql.org/message-id/flat/79692bf9-17d3-41e6-b9c9-fc8c3944222a@eisentraut.org
This allows us to skip it in Cygwin, where it's reportedly flaky because
of platform bugs or something.
Backpatch to 17, where the test was introduced by commit 2466d6654f85.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/e4d0cb33-6be5-e4d5-ae49-9eac3ff2b005@gmail.com
One test case added in 22d946b0f verifies the plan of a non-parallel
nestloop join. The planner's choice of join order is arbitrary, and
slight variations in underlying statistics could result in a different
displayed plan. To stabilize the test result, here we enforce the
join order using a lateral join.
While here, modify the test case to verify that parallel nestloop join
is not generated if the inner path is not parallel-safe, which is what
we wanted to test in 22d946b0f.
Reported-by: Alexander Lakhin as per buildfarm
Author: Richard Guo
Discussion: https://postgr.es/m/7c09a439-e48d-5460-cfa0-a371b1a57066@gmail.com
This new error code, named file_name_too_long, maps internally to the
errno ENAMETOOLONG to produce a proper error code rather than an
internal code under errcode_for_file_access(). This error code can be
reached with some SQL command patterns, like an overly long snapshot
file name.
Reported-by: Alexander Lakhin
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/Zo4ROR9mgy8bowMo@paquier.xyz
Particularly on Windows it's useful to look up dependencies via cmake
instead of pkg-config. Meson supports doing so. Unfortunately the
dependency names used by various projects often differ between their
pkg-config and cmake files.
This would look a lot neater if we could rely on meson >= 0.60.0...
Reviewed-by: Tristan Partin <tristan@partin.io>
Discussion: https://postgr.es/m/20240709065101.xhc74r3mdg2lmn4w@awork3.anarazel.de
Backpatch: 16-, where meson support was added
This is necessary as ossp-uuid on Windows installs neither pkg-config
nor cmake dependency information. Nor is there another supported uuid
implementation available on Windows.
Reported-by: Dave Page <dpage@pgadmin.org>
Reviewed-by: Tristan Partin <tristan@partin.io>
Discussion: https://postgr.es/m/20240709065101.xhc74r3mdg2lmn4w@awork3.anarazel.de
Backpatch: 16-, where meson support was added
This is required as MIT Kerberos provides neither pkg-config nor cmake
dependency information on Windows.
Reported-by: Dave Page <dpage@pgadmin.org>
Reviewed-by: Tristan Partin <tristan@partin.io>
Discussion: https://postgr.es/m/20240709065101.xhc74r3mdg2lmn4w@awork3.anarazel.de
Backpatch: 16-, where meson support was added
These have been missing since the initial introduction of the meson-based
build in e6927270cd18. As-is this is unlikely to cause an issue, but a
future commit
will add support for detecting gssapi without use of dependency(), which could
fail due to this.
Discussion: https://postgr.es/m/20240708225659.gmyqoosi7km6ysgn@awork3.anarazel.de
Backpatch: 16-, where the meson based build was added
If a view has some updatable and some non-updatable columns, we failed
to verify updatability of any columns for which an INSERT or UPDATE
on the view explicitly specifies a DEFAULT item (unless the view has
a declared default for that column, which is rare anyway, and one
would almost certainly not write one for a non-updatable column).
This would lead to an unexpected "attribute number N not found in
view targetlist" error rather than the intended error.
Per bug #18546 from Alexander Lakhin. This bug is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/18546-84a292e759a9361d@postgresql.org
While this doesn't significantly change runtime now, it arranges for
STRATEGY=WAL_LOG to benefit automatically from future optimizations to
the read_stream subsystem. For large tables in the template database,
this does read 16x as many bytes per system call. Platforms with high
per-call overhead, if any, may see an immediate benefit.
Nazir Bilal Yavuz
Discussion: https://postgr.es/m/CAN55FZ0JKL6vk1xQp6rfOXiNFV1u1H0tJDPPGHWoiO3ea2Wc=A@mail.gmail.com
Reaching that code would have required multiple processes performing
relation extension during recovery, which does not happen. That caller
has the persistence available, so pass it. This was dead code as soon
as commit 210622c60e1a9db2e2730140b8106ab57d259d15 added it.
Discussion: https://postgr.es/m/CAN55FZ0JKL6vk1xQp6rfOXiNFV1u1H0tJDPPGHWoiO3ea2Wc=A@mail.gmail.com
If vacuum fails to prune a tuple killed before OldestXmin, it will
decide to freeze its xmax and later error out in pre-freeze checks.
Add a test to the recovery suite reproducing this scenario: it creates a
table on a primary, updates the table to generate dead tuples for
vacuum, and then, during the vacuum, uses a replica to force
GlobalVisState->maybe_needed on the primary to move backwards and
precede the value of OldestXmin set at the beginning of vacuuming the
table.
This commit is separate from the fix in case there are test stability
issues.
Author: Melanie Plageman
Reviewed-by: Peter Geoghegan
Discussion: https://postgr.es/m/CAAKRu_apNU2MPBK96V%2BbXjTq0RiZ-%3DA4ZTaysakpx9jxbq1dbQ%40mail.gmail.com
If vacuum fails to remove a tuple with xmax older than
VacuumCutoffs->OldestXmin and younger than GlobalVisState->maybe_needed,
it may attempt to freeze the tuple's xmax and then ERROR out in
pre-freeze checks with "cannot freeze committed xmax".
Fix this by having vacuum always remove tuples older than OldestXmin.
It is possible for GlobalVisState->maybe_needed to precede OldestXmin if
maybe_needed is forced to go backward while vacuum is running. This can
happen if a disconnected standby with a running transaction older than
VacuumCutoffs->OldestXmin reconnects to the primary after vacuum
initially calculates GlobalVisState and OldestXmin.
In back branches starting with 14, the first version using
GlobalVisState, failing to remove tuples older than OldestXmin during
pruning caused vacuum to infinitely loop in lazy_scan_prune(), as
investigated on this [1] thread. After 1ccc1e05ae removed the retry loop
in lazy_scan_prune() and stopped comparing tuples to OldestXmin, the
hang could no longer happen, but we could still attempt to freeze dead
tuples with xmax older than OldestXmin -- resulting in an ERROR.
Fix this by always removing dead tuples with xmax older than
VacuumCutoffs->OldestXmin. This is okay because the standby won't replay
the tuple removal until the tuple is removable. Thus, the worst that can
happen is a recovery conflict.
[1] https://postgr.es/m/20240415173913.4zyyrwaftujxthf2%40awork3.anarazel.de#1b216b7768b5bd577a3d3d51bd5aadee
Back-patch through 14
Author: Melanie Plageman
Reviewed-by: Peter Geoghegan, Robert Haas, Andres Freund, Heikki Linnakangas, and Noah Misch
Discussion: https://postgr.es/m/CAAKRu_bDD7oq9ZwB2OJqub5BovMG6UjEYsoK2LVttadjEqyRGg%40mail.gmail.com
Only the LLVM-specific code uses it since resource owners were made
extensible in commit b8bff07daa85c837a2747b4d35cd5a27e73fb7b2. This is
new in v17, so backpatch there to keep the branches from diverging
just yet.
Author: Andreas Karlsson <andreas@proxel.se>
Discussion: https://www.postgresql.org/message-id/fd3a2a00-6605-4e30-a118-48418b478e6e@proxel.se
There was no coverage for the code path to unwrap an array before
applying ".*" to it, so add tests to provide more coverage for both
objects and arrays.
This shows, for example, that no results are returned for an array of
scalars, and what results are returned when the array contains an
object. A few more scenarios are covered with the strict/lax modes and
the operator "@?".
Author: David Wheeler
Reported-by: David G. Johnston, Stepan Neretin
Discussion: https://postgr.es/m/A95346F9-6147-46E0-809E-532A485D71D6@justatheory.com
Commit d844cd75a disallowed rewind in a non-scrollable cursor to resolve
anomalies arising from such a cursor operation. However, this failed to
take into account the assumption in postgres_fdw that when rescanning a
foreign relation, it can rewind the cursor created for scanning the
foreign relation without specifying the SCROLL option, regardless of its
scrollability; as a result, postgres_fdw got an error when it tried to
do such a rewind in a non-scrollable cursor. Fix by modifying
postgres_fdw to instead recreate the cursor, regardless of its
scrollability, when rescanning the foreign relation. (If we had a way
to check its scrollability, we could improve this by rewinding the
cursor if it is scrollable and recreating it if not, but we have no such
way, so this commit recreates it in any case.)
Per bug #17889 from Eric Cyr. Devrim Gunduz also reported this problem.
Back-patch to v15 where that commit enforced the prohibition.
Reviewed by Tom Lane.
Discussion: https://postgr.es/m/17889-e8c39a251d258dda%40postgresql.org
Discussion: https://postgr.es/m/b415ac3255f8352d1ea921cf3b7ba39e0587768a.camel%40gunduz.org
For utility statements defined within a function, the query tree is
copied to a PlannedStmt as utility commands do not require planning.
However, the query ID was missing from the information passed down.
This prevented plugins that rely on the query ID, like
pg_stat_statements, from tracking utility statements within function
calls. Tests are added to check this behavior, depending on
pg_stat_statements.track.
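A sketch of the kind of case now covered, with hypothetical names
(pg_stat_statements.track = 'all' is needed so that nested statements
are tracked):
CREATE FUNCTION make_temp() RETURNS void LANGUAGE plpgsql AS
$$ BEGIN CREATE TEMP TABLE tmp_tab (a int); END $$;
SELECT make_temp();
SELECT calls, query FROM pg_stat_statements
  WHERE query LIKE 'CREATE TEMP TABLE%';
With the fix, the nested utility statement now shows up in that query's
result.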
This is an old bug. Since v16 (3db72ebcbe20), query IDs for utilities
are computed from their parse trees rather than the query string,
leading to less bloat with utilities, so backpatch down only to that
version.
Author: Anthonin Bonnefoy
Discussion: https://postgr.es/m/CAO6_XqrGp-uwBqi3vBPLuRULKkddjC7R5QZCgsFren=8E+m2Sg@mail.gmail.com
Backpatch-through: 16