
Declarative partitioning may sound complex, but in reality it’s just a way to tell your database how best to organize large tables — so it can optimize queries and make maintenance easier. Let’s walk through how it works and when declarative partitioning can save the day.
What is partitioning and when you might need it
Partitioning is the process of splitting a single logical table into separate physical parts (partitions). The split is done row-wise, based on the value of a specific column or set of columns called the partition key.
For example, let's say you have a table named city, and you partition it into city_msk, city_spb, and city_nsk. In this case, the city_code column is the partition key.

In a partitioned table like city, no rows are actually stored in the main table; they're all stored in the partitions. Which partition a row ends up in depends on the city code. If there's no suitable partition, the row goes into a DEFAULT partition. If there's no default, the system throws a "no partition found for row" error.
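A minimal sketch of this layout (column definitions are assumptions):
CREATE TABLE city (
    city_code text NOT NULL,
    name text
) PARTITION BY LIST (city_code);

CREATE TABLE city_msk PARTITION OF city FOR VALUES IN ('MSK');
CREATE TABLE city_spb PARTITION OF city FOR VALUES IN ('SPB');
CREATE TABLE city_nsk PARTITION OF city FOR VALUES IN ('NSK');
CREATE TABLE city_default PARTITION OF city DEFAULT;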

From the application’s point of view, partitioning is transparent. All read and write operations go through the parent table. The database planner determines which partition is used for reading or writing.
Partitioning helps:
Boost query performance: the query planner skips partitions that don't contain matching rows.
Make large-table maintenance easier: tasks like VACUUM FULL, REINDEX, or pg_dump run faster on individual partitions than on a monolithic table.
Manage data flexibly: for example, move historical or rarely used data to cheaper storage. More on that in this article.
Declarative or not? Understanding declarative partitioning
This mechanism lets you declare from the start that a table is partitioned. In the create table statement, you specify the partitioning method (e.g. partition by range) and list the columns that make up the partition key.
Transactional DDL makes this a powerful tool for managing partitioned tables and enables automation.
What transactional DDL means
When creating a table, the following properties are guaranteed:
Atomicity: if the transaction fails, there’s no leftover junk to clean up.
Consistency: table creation won’t break data integrity or cause conflicts.
Isolation: the operation runs separately from other transactions.
Durability: once created, the table’s metadata is safely stored on disk.
CREATE TABLE measurement (
    city_id int not null,
    logdate date not null
) PARTITION BY RANGE (logdate);
CREATE TABLE measurement_y2006m02 PARTITION OF measurement
FOR VALUES FROM ('2006-02-01') TO ('2006-03-01');
CREATE TABLE measurement_y2006m03 PARTITION OF measurement
FOR VALUES FROM ('2006-03-01') TO ('2006-04-01');
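For example, partition creation can be wrapped in a transaction; if anything fails before COMMIT, the new partition simply never existed:
BEGIN;
CREATE TABLE measurement_y2006m04 PARTITION OF measurement
    FOR VALUES FROM ('2006-04-01') TO ('2006-05-01');
COMMIT;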
But declarative partitioning does come with a few limitations:
All partitions must have the same set of columns as the parent table.
The partition key must be a subset of the primary key, if one exists. That means if the table has a PRIMARY KEY, you can only partition it by columns that are included in that key.

BEFORE ROW triggers for INSERT cannot change which partition a row goes to.
You cannot declaratively partition an existing table.
How to partition an existing table
There are two options:
1. Attach the existing table as a historical partition to a new partitioned table
First, create a new partitioned table with the same structure:
create table measurements_p (like measurements) partition by range (logdate);
And a future partition:
create table measurements_2025 partition of measurements_p for values from ('2025-01-01') to ('2026-01-01');
Prepare the old table to avoid long locks:
begin;
set local statement_timeout to '1s';
alter table measurements add constraint measurements_partbound_check check (logdate < '2025-01-01' and logdate is not null) not valid;
commit;
alter table measurements validate constraint measurements_partbound_check;

Rename the tables and attach the old one as a partition:
begin;
set local statement_timeout to '1s';
alter table measurements rename to measurements_history;
alter table measurements_p rename to measurements;
alter table measurements attach partition measurements_history
for values from (MINVALUE) to ('2025-01-01');
commit;
The switchover takes under a second. Data dated before Jan 1, 2025 stays in the old table, now attached as a partition, and new data goes into the 2025 partition.

2. Fully repartition the existing table
If you need to physically split the data, use this approach. Create a new partitioned table with a different name and define the necessary partitions.
Copy the data from the old table:
INSERT INTO measurement_p SELECT * FROM measurement;
or export and re-import with COPY (the file path is illustrative):
COPY measurement TO '/tmp/measurement.dat';
COPY measurement_p FROM '/tmp/measurement.dat';
Since the transfer might take time, we don't want to block access to the original table. To keep things in sync, create insert, update, and delete triggers on the old table to replicate changes into the new one during the transfer, as sketched below.
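A hypothetical sketch of such a sync trigger, assuming (city_id, logdate) identifies a row; names are illustrative:
CREATE OR REPLACE FUNCTION measurement_sync() RETURNS trigger AS $$
BEGIN
    -- remove the old version of the row on UPDATE/DELETE
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        DELETE FROM measurement_p
        WHERE city_id = OLD.city_id AND logdate = OLD.logdate;
    END IF;
    -- write the new version on INSERT/UPDATE
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        INSERT INTO measurement_p VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- AFTER trigger: the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurement_sync_trg
    AFTER INSERT OR UPDATE OR DELETE ON measurement
    FOR EACH ROW EXECUTE FUNCTION measurement_sync();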
After the data is moved, rename the new partitioned table to match the old name.

Partitioning by hash, list, and range
Declarative partitioning supports three methods:
by hash
by list of values
by range
When working with partitioned tables, there's an optimization called partition pruning. To enable pruning, the planner needs context about how the table is partitioned. The system catalog stores information about how many partitions exist, which ranges they cover, and so on. When a query with a predicate (a WHERE condition) is sent to the database, the planner can use this metadata to exclude unnecessary partitions from the query plan.
For this to happen, the parameter enable_partition_pruning must be set to on (the default).
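You can see pruning at work with EXPLAIN on the measurement table from earlier; pruned partitions simply don't appear in the plan:
SET enable_partition_pruning = on;
EXPLAIN SELECT * FROM measurement WHERE logdate = '2006-02-15';
-- only measurement_y2006m02 is scanned; the other partitions are pruned away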
Let’s take a closer look at the partitioning methods.
1. Hash partitioning
-- parent table; the column name is illustrative
CREATE TABLE orders (order_id int) PARTITION BY HASH (order_id);
CREATE TABLE orders_1 PARTITION OF orders FOR VALUES WITH (MODULUS 3, REMAINDER 0);
CREATE TABLE orders_2 PARTITION OF orders FOR VALUES WITH (MODULUS 3, REMAINDER 1);
CREATE TABLE orders_3 PARTITION OF orders FOR VALUES WITH (MODULUS 3, REMAINDER 2);
Hash partitioning is typically used when you need to split a table into equally sized partitions. Since hash function values are distributed uniformly, the rows are also distributed evenly across partitions. The target partition is determined by the remainder of the partition key's hash value divided by the modulus (the number of partitions).
If you imagine all possible key values arranged along a line (the allowed value range), hash partitioning can be visualized like this:

Comparison operations like greater than or less than do not apply to hash values, so partition pruning works only for equality joins and queries with strict equality conditions.
The number of partitions is fixed at creation time. While you cannot change the total count afterward, you can attach or detach partitions dynamically.
2. List partitioning
CREATE TABLE part_1 PARTITION OF city FOR VALUES IN ('MSK');
CREATE TABLE part_2 PARTITION OF city FOR VALUES IN ('NSK');
CREATE TABLE part_3 PARTITION OF city FOR VALUES IN ('SPB');
Here, the partition key matches one of the listed values.

3. Range partitioning
CREATE TABLE part_1 PARTITION OF sales FOR VALUES FROM ('2006-01-01') TO ('2010-01-01');
CREATE TABLE part_2 PARTITION OF sales FOR VALUES FROM ('2013-01-01') TO ('2016-01-01');
CREATE TABLE part_3 PARTITION OF sales FOR VALUES FROM ('2016-01-01') TO ('2020-01-01');
Range partitioning is the most commonly used method. In this approach, key values are matched to specified intervals, which can vary in size and may have gaps between them. When creating each partition, you define the range it covers to determine which values belong to that partition.
Each partition’s range is defined by two values: a start (“from”) and an end (“to”). The start value is inclusive, while the end value is exclusive. Therefore, to create two contiguous partitions without gaps, the end value of the first partition should be the start value of the next one.

Vanilla partitioning tools
Let's look at attach, detach, default, and multi-level partitioning.
Attach and detach
ATTACH lets you add a table as a partition to a partitioned table.
When attaching a table as a partition, you must specify which partition key values it covers, according to the partitioning method: for range partitioning, the interval; for list partitioning, the set of values; for hash partitioning, the modulus and remainder.
ALTER TABLE measurement ATTACH PARTITION measurement_2023 FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');

This operation requires a lock while the table is scanned to verify that it contains no values outside the specified boundaries. You can reduce the lock time by creating a matching CHECK constraint in advance; the scan is then skipped.
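For example, before the ATTACH above, you could create and validate the constraint yourself:
ALTER TABLE measurement_2023 ADD CONSTRAINT measurement_2023_bounds
    CHECK (logdate >= '2023-01-01' AND logdate < '2024-01-01') NOT VALID;
ALTER TABLE measurement_2023 VALIDATE CONSTRAINT measurement_2023_bounds;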
DETACH removes a partition, turning it into a standalone table:
ALTER TABLE measurement DETACH PARTITION measurement_2023;

Default
The default partition catches rows that don’t match any other partition:
CREATE TABLE … PARTITION OF … DEFAULT;
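For instance, for the measurement table from earlier:
CREATE TABLE measurement_default PARTITION OF measurement DEFAULT;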
Multi-level partitioning
Multi-level partitioning (or sub-partitioning) means that partitions themselves are partitioned tables.
Since you cannot declaratively partition an existing table, to build a multi-level structure you must first create a partitioned table and then use attach to add it as a partition to another partitioned table. The partitioning methods can differ between levels, as in the sketch below.
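A minimal sketch reusing the measurement table from earlier, with range at the top level and list below it (sub-partition values are illustrative):
-- build the second level first: a partitioned table in its own right
CREATE TABLE measurement_2025 (LIKE measurement) PARTITION BY LIST (city_id);
CREATE TABLE measurement_2025_c1 PARTITION OF measurement_2025 FOR VALUES IN (1);

-- then attach it as a partition of the top-level range-partitioned table
ALTER TABLE measurement ATTACH PARTITION measurement_2025
    FOR VALUES FROM ('2025-01-01') TO ('2026-01-01');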

Replicating partitioned tables
Physical replication creates a complete copy of the database on a standby server. Logical replication is more flexible: you create a publication on selected tables on one server and subscribe to it from others.
Logical replication is typically used in two cases:
Data consolidation. Suppose you have a central server and several regional servers. Each regional server publishes changes, and the central one subscribes to them. New data appearing in regional tables is pushed to the central server, where it is consolidated.
Updating reference data dictionaries. For instance, you store reference data centrally and want to push it to regional nodes. The central server publishes the table, and others subscribe to it.
create publication name [ for table ... ] [ with ( publish_via_partition_root = on|off [, ...] ) ]
The publish_via_partition_root parameter controls how partitioned tables are replicated to subscribers.
When publish_via_partition_root = off, insert, update, delete, and truncate commands are replicated using the names of the partitions the rows belong to. For example, if an insert targets the city table with the value msk, the subscriber receives an insert into city_msk, as if each partition were published separately. Subscribers must have matching tables, but they don't need to be part of a partitioned city table; standalone tables work too.

publish_via_partition_root = off
When the option is enabled (publish_via_partition_root = on), all changes are replicated through the main (parent) table, not through its individual partitions.
For example, if you have a partitioned table city and you insert new data into it using an INSERT command, those changes will be sent to subscribers as operations on the city table, not on its specific partitions.
This means that on the subscriber side, the city table may be partitioned differently or not partitioned at all. The subscriber will still be able to receive and apply changes, even if its partitioning structure differs from that of the publisher.

publish_via_partition_root = on
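A minimal sketch of this setup (publication, subscription, and connection details are illustrative):
-- on the publisher
CREATE PUBLICATION city_pub FOR TABLE city
    WITH (publish_via_partition_root = on);

-- on the subscriber
CREATE SUBSCRIPTION city_sub
    CONNECTION 'host=publisher dbname=demo'
    PUBLICATION city_pub;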
Advanced partition operations in Postgres Pro
Let’s look at additional operations available for partitions in Postgres Pro.
1. SPLIT lets you divide a partition into smaller parts:
alter table measurement
split partition partition_name
into (
partition partition_name_1 { for values ... | default },
partition partition_name_2 { for values ... | default }
);
For example, if the 2024 partition grew too large, you can split it into quarterly partitions.
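For instance (partition names are illustrative):
alter table measurement
split partition measurement_y2024
into (
    partition measurement_y2024q1 for values from ('2024-01-01') to ('2024-04-01'),
    partition measurement_y2024q2 for values from ('2024-04-01') to ('2024-07-01'),
    partition measurement_y2024q3 for values from ('2024-07-01') to ('2024-10-01'),
    partition measurement_y2024q4 for values from ('2024-10-01') to ('2025-01-01')
);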

The operation requires an exclusive lock and must meet these conditions:
If there is no DEFAULT partition, the combined range of new partitions must fully cover the original partition’s range.
If splitting the DEFAULT partition, the resulting set must include a DEFAULT as well.
2. MERGE combines multiple partitions into one:
alter table measurement
merge partitions (partition_1, partition_2)
into merged_partition;
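For example, two quarterly partitions can be recombined into a half-year one (names are illustrative):
alter table measurement
merge partitions (measurement_y2024q1, measurement_y2024q2)
into measurement_y2024h1;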

3. Pgpro_autopart extension — on-demand auto-partitioning
Partitioning is an extremely useful tool, but to work with it, you need to prepare partitions in advance by creating them manually. This adds a significant workload for DBAs, considering that partitions are usually counted in the hundreds. Pgpro_autopart allows partitions to be created dynamically when a row is inserted and a suitable partition does not exist.
The extension includes a function, ap_enable_automatic_partition_creation, where you need to specify the table name (tablename), the interval (interval), and the starting value (first_value) — the reference point from which intervals will be offset.
Pgpro_autopart works only with new tables because it cannot track partitions that were created earlier.
The ap_tables_view view allows you to monitor tables with automatic partitioning enabled.
The extension is available since version 17 of Postgres Pro and replaces the now deprecated pg_pathman functionality.
How the pgpro_autopart extension works
Let’s say we want to create a table that supports automatic partition creation.
Step 1. Create an empty table measurement partitioned by id using range:
create table measurement (
id int,
...
) partition by range (id);

Step 2. Call the function from the extension, specifying the table name, interval, and start value:
select ap_enable_automatic_partition_creation('measurement', 10, 100);
where 10 is the interval, and 100 is the start value.
At this point, the table is automatically renamed with the prefix real_.
A view is created with the old table's name, measurement, defined as SELECT * FROM real_measurement. It is needed so that the trigger on this view can check whether a suitable partition exists; such logic cannot be implemented directly in a trigger on the table itself. Importantly, all this happens transparently for the application, which now simply queries the view.
An INSTEAD OF INSERT, UPDATE trigger is automatically created on the view; it dynamically creates partitions when no suitable partition is found for the inserted row.

Step 3. Insert the first row with id = 111:
INSERT INTO measurement VALUES (111, 'text');
NOTICE: New partition "public"."real_measurement_110_120" created
At the same time, the first partition is created automatically. Its name includes the boundaries of the interval it covers (110 to 120).

Step 4. If you later UPDATE the row and change id from 111 to -55, a new partition is created automatically.
UPDATE measurement SET id = -55 WHERE id = 111 RETURNING *;
NOTICE: New partition "public"."real_measurement_-60_-50" created
UPDATE, INSERT, DELETE with RETURNING are handled correctly.

What’s next
In Q1 2025, global indexes were introduced. These are shared indexes across all partitions, supporting uniqueness constraints. You can now create a unique constraint on any column, not just the partition key.

In the first quarter of 2026, reference (or foreign key) partitioning will be introduced. In this method, a new table is partitioned not explicitly by its own PRIMARY KEY, but through a FOREIGN KEY reference to the parent table, using the same key ranges as the parent. The partitions of the parent and child tables become linked, so when planning a query, identifying one partition lets the planner determine the related partition without scanning the entire hierarchy.

When a partition is attached (ATTACH) to the parent table, it is also attached to the child table; similarly, when it is detached (DETACH) from the parent, it is detached from the child table as well.

Example of reference partitioning
Suppose there is an ORDERS table partitioned by intervals (months). When creating a new table, we can point it at ORDERS as its reference, and the child partitioned table will be divided into the same intervals. It will have as many partitions as the parent table. Moreover, when new partitions are added to the parent table, they will also appear in the child table.

That's all. Drop us a comment if you have questions.