Postgres Professional
Developer of the Postgres Pro DBMS

JSONPath in PostgreSQL: committing patches and selecting apartments

Reading time 10 min
Views 26K

This article was originally written in Russian in 2019, after the PostgreSQL 12 feature freeze, and it is still up to date. Unfortunately, the other SQL/JSON patches will not make it even into version 13.
Many thanks to Elena Indrupskaya for the translation.

JSONPath


Everything related to JSON(B) is relevant and in high demand both worldwide and in Russia, and it is one of the key development areas at Postgres Professional. The jsonb type, as well as the functions and operators to manipulate JSON/JSONB, appeared as early as PostgreSQL 9.4. They were developed by the team led by Oleg Bartunov.

The SQL:2016 standard provides for JSON usage: it defines JSONPath, a set of capabilities to address data inside JSON; JSON_TABLE, facilities for converting JSON into ordinary database tables; and a large family of functions and operators. Although JSON has long been supported in Postgres, in 2017 Oleg Bartunov and his colleagues started working on support for the standard. Of everything described in the standard, only one patch, but a critical one, got into version 12: JSONPath, which is therefore what we describe here.
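
A minimal sketch of the jsonpath syntax that landed in version 12 (the document and the area threshold are made up for illustration, in the spirit of the apartments from the title):

    SELECT jsonb_path_query(
      '{"floor": [{"apt": [{"no": 1, "area": 40}, {"no": 2, "area": 80}]}]}',
      '$.floor[*].apt[*] ? (@.area > 50)'  -- apartments larger than 50 m²
    );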

What is Baked in the Baker's Dozen?

Reading time 31 min
Views 1.6K
On April 8, the PostgreSQL feature freeze took place, so only features committed earlier will get into version 13. This version can hardly be considered revolutionary, since it has no conceptual changes. Some critical patches were too late to get in, such as JSON_TABLE and the functions for the SQL/JSON standard, which had been expected to be part of PostgreSQL 12 along with the JSONPath patch; pluggable storage did not appear either — only the interface is being finalized. The list of improvements is still impressive, though. We prepared a pretty complete overview of the patches included in the Baker's Dozen.

WAL in PostgreSQL: 4. Setup and Tuning

Reading time 17 min
Views 8.8K
So, we have got acquainted with the structure of the buffer cache and concluded, in this context, that if all the RAM contents were lost due to a failure, the write-ahead log (WAL) would be required to recover. The size of the necessary WAL files and the recovery time are limited thanks to the checkpoint performed from time to time.

In the previous articles we already reviewed quite a few important settings that relate to WAL in one way or another. In this article (the last one in this series) we will discuss the WAL setup questions that remain unaddressed: WAL levels and their purpose, as well as the reliability and performance of write-ahead logging.

WAL levels


The main task of WAL is to ensure recovery after a failure. But since we have to maintain the log anyway, we can also adapt it to other tasks by adding some more information to it. There are several logging levels. The wal_level parameter specifies the level, and each subsequent level includes everything that gets into the WAL of the preceding level plus something new.
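
A minimal sketch of inspecting and changing the level (wal_level can only be changed with a server restart):

    SHOW wal_level;                        -- minimal, replica, or logical
    ALTER SYSTEM SET wal_level = logical;  -- e.g. to allow logical decoding
    -- the new value takes effect only after a server restart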

WAL in PostgreSQL: 3. Checkpoint

Reading time 11 min
Views 6.5K
We have already got acquainted with the structure of the buffer cache — one of the main objects in shared memory — and concluded that to recover after a failure, when all the RAM contents are lost, the write-ahead log (WAL) must be maintained.

The problem left unaddressed, where we stopped last time, is that we do not know where to start playing back WAL records during recovery. To begin from the beginning, as the King from Lewis Carroll's Alice advised, is not an option: it is impossible to keep all the WAL records from the server start — this potentially means both a huge amount of space and an equally huge recovery time. We need a point that gradually moves forward and from which we can start the recovery (and, accordingly, safely remove all the preceding WAL records). And this is the checkpoint, to be discussed below.

Checkpoint


What properties must the checkpoint have? We must be sure that all the WAL records starting with the checkpoint will be applied to the pages that were flushed to disk. If this were not the case, during recovery we could read from disk a version of a page that is too old, apply a WAL record to it, and thereby irreversibly corrupt the data.
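
For illustration, a sketch of forcing a checkpoint and peeking at where the last one happened (pg_control_checkpoint() exists since version 9.6; the column names below are those of version 10 and later):

    CHECKPOINT;  -- force an immediate checkpoint (superuser only)
    SELECT checkpoint_lsn, redo_lsn
    FROM pg_control_checkpoint();  -- recovery replays WAL starting from redo_lsn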

WAL in PostgreSQL: 2. Write-Ahead Log

Reading time 8 min
Views 6.9K
Last time we got acquainted with the structure of an important component of shared memory — the buffer cache. The risk of losing the contents of RAM is the main reason why we need techniques to recover data after a failure. Now we will discuss these techniques.

The log


Sadly, there's no such thing as miracles: to survive the loss of information in RAM, everything needed must be duly saved to disk (or other nonvolatile media).

Therefore, the following approach is used: along with changing the data, a log of these changes is maintained. When we change something on a page in the buffer cache, we create a record of this change in the log. The record contains the minimum information sufficient to redo the change if the need arises.

For this to work, the log record must reach disk before the changed page does. And this explains the name: write-ahead log (WAL).

In case of a failure, the data on disk end up inconsistent: some pages were written earlier, others later. But the WAL survives, and we can read it and redo the operations that were performed before the failure but whose results did not make it to disk.
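
A sketch of watching the log grow: every change advances the current WAL insert position, and pg_walfile_name() maps it to a segment file (both functions exist under these names since version 10):

    SELECT pg_current_wal_lsn();                   -- current position in WAL
    SELECT pg_walfile_name(pg_current_wal_lsn());  -- segment file holding that position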

WAL in PostgreSQL: 1. Buffer Cache

Reading time 13 min
Views 7.1K
The previous series addressed isolation and multiversion concurrency control, and now we start a new series: on write-ahead logging. To remind you, the material is based on training courses on administration that Pavel pluzanov and I are creating (mostly in Russian, although one course is available in English), but does not repeat them verbatim and is intended for careful reading and self-experimenting.

This series will consist of four parts:

  1. Buffer Cache (this article).
  2. Write-Ahead Log.
  3. Checkpoint.
  4. Setup and Tuning.

Many thanks to Elena Indrupskaya for the translation of these articles into English.

Why do we need write-ahead logging?


Part of the data that a DBMS works with is stored in RAM and gets written to disk (or other nonvolatile storage) asynchronously, i.e., writes are postponed for some time. The less frequently this happens, the less input/output is done and the faster the system operates.

But what will happen in case of a failure, for example, a power outage or a bug in the code of the DBMS or operating system? All the contents of RAM will be lost, and only the data written to disk will survive (disks are not immune to certain failures either, and only a backup copy can help if the data on disk are affected). In general, it is possible to organize input/output in such a way that the data on disk are always consistent, but this is complicated and not very efficient (to my knowledge, only Firebird chose this option).

Usually, and specifically in PostgreSQL, the data written to disk end up inconsistent, and when recovering after a failure, special actions are required to restore consistency. Write-ahead logging (WAL) is exactly the feature that makes this possible.
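
The deferred writes can be observed directly with the pg_buffercache extension, which ships with PostgreSQL; a sketch counting dirty buffers — pages changed in RAM but not yet written to disk:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    SELECT count(*) FILTER (WHERE isdirty) AS dirty_buffers
    FROM pg_buffercache;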

On recursive queries

Reading time 25 min
Views 10K
This article deals with writing recursive queries. This topic comes up regularly, but the discussion is usually limited to simple cases related to trees: descending from a vertex to the leaves and ascending from a vertex to the root. We will address the more complicated case of an arbitrary graph.

Let's start by recalling the theory (very briefly, since all of it is trivial), and then we will discuss what to do if it is unclear how to approach a real-life problem, or if it seems clear but the query persistently fails to work properly.

For an exercise, we will use the airlines demo database and try to write a query to find the shortest route from one airport to another.
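
A sketch of the shape such a query can take (flights, with its departure_airport and arrival_airport columns, is a table of the demo database; the airport codes and the depth limit are purely illustrative, and the query is deliberately naive):

    WITH RECURSIVE p(airport, route, hops) AS (
      SELECT 'SVO'::text, ARRAY['SVO'], 0
      UNION ALL
      SELECT f.arrival_airport::text,
             p.route || f.arrival_airport::text,
             p.hops + 1
      FROM p JOIN flights f ON f.departure_airport = p.airport
      WHERE f.arrival_airport::text <> ALL(p.route)  -- do not revisit airports
        AND p.hops < 3                               -- cap the search depth
    )
    SELECT route FROM p WHERE airport = 'LED' ORDER BY hops LIMIT 1;

Carrying the visited route along in an array is the standard trick for cutting off cycles.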

MVCC in PostgreSQL-8. Freezing

Reading time 12 min
Views 5.4K
We started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions.

Then we covered different vacuuming techniques: in-page vacuum (along with HOT updates), vacuum and autovacuum.

Now we've reached the last topic of this series. We will talk about transaction ID wraparound and freezing.
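
For a taste, a sketch of checking how far each database is from wraparound trouble, by the age of its oldest unfrozen transaction ID:

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;  -- autovacuum forces freezing before the age gets dangerous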

MVCC in PostgreSQL-7. Autovacuum

Reading time 10 min
Views 2.4K
To remind you, we started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions.

Then we explored in-page vacuum (and HOT updates) and vacuum. Now we'll look into autovacuum.

Autovacuum


We've already mentioned that normally (i.e., when nothing holds back the transaction horizon for a long time) VACUUM does its job. The problem is how often to call it.

If we vacuum a changing table too rarely, its size will grow more than desired. Besides, the next vacuum operation may require several passes through the indexes if too many changes have accumulated.

If we vacuum the table too often, the server will constantly do maintenance rather than useful work — and this is no good either.

Note that launching VACUUM on a schedule by no means resolves the issue, because the workload can change over time. If the table starts to change more intensively, it must be vacuumed more often.

Autovacuum is exactly the technique that enables us to launch vacuuming depending on how intensively the table changes.
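
A sketch of the knobs involved: a table is queued for vacuuming once its dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × reltuples, and both values can be overridden per table (the bookings table name is hypothetical):

    SHOW autovacuum_vacuum_scale_factor;  -- 0.2 by default, i.e. 20% of the table
    -- vacuum a frequently changing table more aggressively than the global default
    ALTER TABLE bookings SET (autovacuum_vacuum_scale_factor = 0.01);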

MVCC in PostgreSQL-6. Vacuum

Reading time 13 min
Views 3.4K
We started with problems related to isolation, made a digression about low-level data structure, then discussed row versions and observed how data snapshots are obtained from row versions.

Last time we talked about HOT updates and in-page vacuuming, and today we'll proceed to the well-known vacuum vulgaris. Really, so much has already been written about it that I can hardly add anything new, but the beauty of a full picture requires sacrifice. So bear with me.

Vacuum


What does vacuum do?


In-page vacuum works fast, but frees only part of the space. It works within one table page and does not touch indexes.

The basic, «normal» vacuum is done using the VACUUM command, and we will call it just «vacuum» (leaving «autovacuum» for a separate discussion).

So, vacuum processes the entire table. It vacuums away not only dead tuples, but also references to them from all indexes.

Vacuuming runs concurrently with other activities in the system. The table and indexes can be used in the regular way both for reads and updates (however, concurrent execution of commands such as CREATE INDEX, ALTER TABLE, and some others is impossible).

Only those table pages where some activity took place are looked through. To detect them, the visibility map is used (to remind you, the map tracks pages that contain only sufficiently old tuples, which are guaranteed to be visible in all data snapshots). Only the pages not tracked by the visibility map are processed, and the map itself gets updated.

The free space map also gets updated in the process to reflect the extra free space in the pages.
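
A sketch of watching this at work (the bookings table name is hypothetical):

    VACUUM (VERBOSE) bookings;  -- report dead tuples removed from the table and its indexes
    SELECT last_vacuum, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'bookings';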

MVCC in PostgreSQL-5. In-page vacuum and HOT updates

Reading time 9 min
Views 4.4K
Just to remind you, we already discussed issues related to isolation, made a digression regarding low-level data structure, and then explored row versions and observed how data snapshots are obtained from row versions.

Now we will proceed to two closely connected topics: in-page vacuum and HOT updates. Both techniques can be regarded as optimizations; they are important but barely covered in the documentation.

In-page vacuum during regular updates


When accessing a page for either an update or read, if PostgreSQL understands that the page is running out of space, it can do a fast in-page vacuum. This happens in either of the cases:

  1. A previous update on this page did not find enough space to put a new row version on the same page. Such a situation is remembered in the page header, and the page is vacuumed the next time.
  2. The page is more than fillfactor percent full. In this case, vacuum is performed right away, without being put off till next time (see the sketch below).
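
A sketch of reserving in-page space via fillfactor (the table name is hypothetical; HOT also requires that the updated columns not be indexed):

    CREATE TABLE hot_test(id integer PRIMARY KEY, s text) WITH (fillfactor = 75);
    -- after some updates on s, compare total vs HOT update counters
    SELECT n_tup_upd, n_tup_hot_upd
    FROM pg_stat_user_tables
    WHERE relname = 'hot_test';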

MVCC in PostgreSQL-4. Snapshots

Reading time 9 min
Views 6.3K
After having discussed isolation problems and having made a digression regarding the low-level data structure, last time we explored row versions and observed how different operations changed tuple header fields.

Now we will look at how consistent data snapshots are obtained from tuples.

What is a data snapshot?


Data pages can physically contain several versions of the same row. But each transaction must see only one (or none) version of each row, so that all of them make up a consistent picture of the data (in the sense of ACID) as of a certain point in time.

Isolation in PostgreSQL is based on snapshots: each transaction works with its own data snapshot, which «contains» data that were committed before the moment the snapshot was created and does not «contain» data that were not yet committed by that moment. We've already seen that although the resulting isolation appears stricter than required by the standard, it still has anomalies.
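
A sketch: at the REPEATABLE READ level the snapshot is built once for the whole transaction and can be inspected (its three parts are xmin, xmax, and the list of transactions in progress):

    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT txid_current_snapshot();  -- e.g. 733:735:733 — xmin:xmax:xip_list
    COMMIT;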

MVCC in PostgreSQL-3. Row Versions

Reading time 13 min
Views 7.9K
Well, we've already discussed isolation and made a digression regarding the low-level data structure. And we've finally reached the most fascinating thing, that is, row versions (tuples).

Tuple header


As already mentioned, several versions of each row can be simultaneously available in the database. And we need to somehow distinguish one version from another one. To this end, each version is labeled with its effective «time» (xmin) and expiration «time» (xmax). Quotation marks denote that a special incrementing counter is used rather than the time itself. And this counter is the transaction identifier.

(As usual, the reality is more complicated: the transaction ID cannot grow indefinitely due to the limited bit depth of the counter. But we will explore these details when our discussion reaches freezing.)
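
These two fields are available as system columns of every table; a sketch (the table is hypothetical):

    CREATE TABLE t(s text);
    INSERT INTO t VALUES ('FOO');
    SELECT xmin, xmax, s FROM t;  -- xmin = creating transaction ID, xmax = 0 while the version is live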

MVCC in PostgreSQL-2. Forks, files, pages

Reading time 11 min
Views 5.2K
Last time we talked about data consistency, looked at the difference between levels of transaction isolation from the point of view of the user and figured out why this is important to know. Now we are starting to explore how PostgreSQL implements snapshot isolation and multiversion concurrency.

In this article, we will look at how data is physically laid out in files and pages. This takes us away from discussing isolation, but such a digression is necessary to understand what follows. We will need to figure out how the data storage is organized at a low level.

Relations


If you look inside tables and indexes, it turns out that they are organized in a similar way. Both are database objects that contain some data consisting of rows.

There is no doubt that a table consists of rows, but this is less obvious for an index. However, imagine a B-tree: it consists of nodes that contain indexed values and references to other nodes or table rows. It's these nodes that can be considered index rows, and in fact, they are.

Actually, a few more objects are organized in a similar way: sequences (essentially single-row tables) and materialized views (essentially, tables that remember the query). And there are also regular views, which do not store data themselves, but are in all other senses similar to tables.

In PostgreSQL, all these objects are referred to by the common word relation. This word is rather unfortunate because it is a term from relational theory. You can draw a parallel between a relation and a table (or view), but certainly not between a relation and an index. But it just so happened: the academic origin of PostgreSQL manifests itself here. It seems to me that tables and views were named so first, and the rest accreted over time.
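
All these relations meet in the pg_class system catalog, distinguished by the relkind column; a sketch:

    SELECT relname, relkind  -- r = table, i = index, S = sequence, m = materialized view, v = view
    FROM pg_class
    WHERE relkind IN ('r', 'i', 'S', 'm', 'v')
    LIMIT 5;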

MVCC in PostgreSQL-1. Isolation

Reading time 24 min
Views 11K
Hello, Habr! With this article I start a set of series (or a series of sets? — In a word, the idea is grandiose) about the internal structure of PostgreSQL.

The material will be based on training courses (in Russian) on administration that Pavel pluzanov and I are creating. Not everyone likes to watch video (I definitely do not), and reading slides, even with comments, is no good at all.

Unfortunately, the only course available in English at the moment is 2-Day Introduction to PostgreSQL 11.

Of course, the articles will not be exactly the same as the content of the courses. I will talk only about how everything is organized, omitting the administration itself, but I will try to do it in more detail and more thoroughly. And I believe that the knowledge like this is as useful to an application developer as it is to an administrator.

I will target those who already have some experience in using PostgreSQL and at least in general understand what is what. The text will be too difficult for beginners. For example, I will not say a word about how to install PostgreSQL and run psql.

The stuff in question does not vary much from version to version, but I will use the current, 11th vanilla PostgreSQL.

The first series deals with issues related to isolation and multiversion concurrency, and the plan of the series is as follows:

  1. Isolation as understood by the standard and PostgreSQL (this article).
  2. Forks, files, pages — what is happening at the physical level.
  3. Row versions, virtual transactions and subtransactions.
  4. Data snapshots and the visibility of row versions; the event horizon.
  5. In-page vacuum and HOT updates.
  6. Normal vacuum.
  7. Autovacuum.
  8. Transaction id wraparound and freezing.

Off we go!

And before we start, I would like to thank Elena Indrupskaya for translating the articles to English.


Indexes in PostgreSQL — 10 (Bloom)

Reading time 11 min
Views 6.8K
In the previous articles we discussed the PostgreSQL indexing engine and the interface of access methods, as well as hash indexes, B-trees, GiST, SP-GiST, GIN, RUM, and BRIN. But we still need to look at Bloom indexes.

Bloom


General concept


A classical Bloom filter is a data structure that enables us to quickly check membership of an element in a set. The filter is highly compact, but allows false positives: it can mistakenly consider an element to be a member of a set (false positive), but it is not permitted to consider an element of a set not to be a member (false negative).

The filter is an array of $m$ bits (also called a signature) that is initially filled with zeros. $k$ different hash functions are chosen that map any element of the set to $k$ bits of the signature. To add an element to the set, we need to set each of these bits in the signature to one. Consequently, if all the bits corresponding to an element are set to one, the element can be a member of the set, but if at least one bit equals zero, the element is not in the set for sure.

In the case of a DBMS, we actually have $N$ separate filters built for each index row. As a rule, several fields are included in the index, and it's values of these fields that compose the set of elements for each row.

By choosing the length of the signature $m$, we can find a trade-off between the index size and the probability of false positives. The application area for the Bloom index is large, considerably «wide» tables that are queried with filters on arbitrary fields. This access method, like BRIN, can be regarded as an accelerator of sequential scan: all the matches found by the index must be rechecked against the table, but there is a chance to avoid considering most of the rows at all.
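
A sketch of building such an index (the signature length and bits per column are exactly the trade-off knobs discussed above; the table and column names are hypothetical):

    CREATE EXTENSION bloom;
    -- an 80-bit signature per row, 2 bits set per indexed column
    CREATE INDEX ON wide_table USING bloom(a, b, c)
      WITH (length = 80, col1 = 2, col2 = 2, col3 = 2);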

Indexes in PostgreSQL — 9 (BRIN)

Reading time 18 min
Views 8.7K
In the previous articles we discussed the PostgreSQL indexing engine, the interface of access methods, and the following methods: hash indexes, B-trees, GiST, SP-GiST, GIN, and RUM. The topic of this article is BRIN indexes.

BRIN


General concept


Unlike indexes with which we've already got acquainted, the idea of BRIN is to avoid looking through definitely unsuited rows rather than quickly find the matching ones. This is always an inaccurate index: it does not contain TIDs of table rows at all.

Simplistically, BRIN works fine for columns whose values correlate with their physical location in the table — in other words, if a query without an ORDER BY clause returns the column values virtually in increasing or decreasing order (and there are no indexes on that column).

This access method was created within the scope of Axle, the European project for extremely large analytical databases, with an eye on tables that are several terabytes or dozens of terabytes in size. An important feature of BRIN that enables us to create indexes on such tables is its small size and minimal maintenance overhead.

This works as follows. The table is split into ranges that are several pages (or, which is the same, several blocks) in size — hence the name: Block Range Index, BRIN. The index stores summary information on the data in each range. As a rule, these are the minimal and maximal values, but this can differ, as shown further. Assume a query contains a condition on a column: if the sought values do not fall within the stored interval, the whole range can be skipped; but if they do, all rows in all its blocks have to be looked through to choose the matching ones.
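
A sketch of creating one (pages_per_range is the knob that trades index size for precision; the table and column names are hypothetical):

    CREATE INDEX ON measurements USING brin(logdate)
      WITH (pages_per_range = 128);  -- 128 pages per range is the default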

It will not be a mistake to treat BRIN not as an index, but as an accelerator of sequential scan. We can regard BRIN as an alternative to partitioning if we consider each range as a «virtual» partition.

Now let's discuss the structure of the index in more detail.

Indexes in PostgreSQL — 8 (RUM)

Reading time 11 min
Views 8.3K
We have already discussed the PostgreSQL indexing engine, the interface of access methods, and the main access methods: hash indexes, B-trees, GiST, SP-GiST, and GIN. In this article, we will watch how gin turns into rum.

RUM


Although the authors claim that gin is a powerful genie, the theme of drinks has eventually won: next-generation GIN has been called RUM.

This access method expands the concept that underlies GIN and enables us to perform full-text search even faster. In this series of articles, this is the only method that is not included in a standard PostgreSQL delivery and is an external extension. Several installation options are available for it:

  • Take «yum» or «apt» package from the PGDG repository. For example, if you installed PostgreSQL from «postgresql-10» package, also install «postgresql-10-rum».
  • Build from the source code on github and install it on your own (the instructions are there as well).
  • Use as a part of Postgres Pro Enterprise (or at least read the documentation from there).

Limitations of GIN


What limitations of GIN does RUM enable us to transcend?

First, the «tsvector» data type contains not only lexemes but also information on their positions inside the document. As we observed last time, a GIN index does not store this information. For this reason, the phrase search operations that appeared in version 9.6 are supported by a GIN index inefficiently and have to access the original data for rechecking.

Second, search systems usually return the results sorted by relevance (whatever that means). We can use ranking functions «ts_rank» and «ts_rank_cd» to this end, but they have to be computed for each row of the result, which is certainly slow.

To a first approximation, RUM access method can be considered as GIN that additionally stores position information and can return the results in a needed order (like GiST can return nearest neighbors). Let's move step by step.
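
A sketch of what this looks like (table and column names are hypothetical; rum_tsvector_ops is the extension's operator class for «tsvector», and the <=> operator returns the distance used for ranking):

    CREATE EXTENSION rum;
    CREATE INDEX ON docs USING rum(tsv rum_tsvector_ops);
    SELECT id
    FROM docs
    WHERE tsv @@ to_tsquery('english', 'postgres')
    ORDER BY tsv <=> to_tsquery('english', 'postgres');  -- sorted by relevance, right out of the index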

Indexes in PostgreSQL — 7 (GIN)

Reading time 18 min
Views 24K
We have already got acquainted with the PostgreSQL indexing engine and the interface of access methods and discussed hash indexes, B-trees, as well as GiST and SP-GiST indexes. And this article will feature the GIN index.

GIN


«Gin?.. Gin is, it seems, such an American liquor?..»
«I'm not a drink, oh, inquisitive boy!» again the old man flared up, again he realized himself and again took himself in hand. «I am not a drink, but a powerful and undaunted spirit, and there is no such magic in the world that I would not be able to do.»

— Lazar Lagin, «Old Khottabych».

Gin stands for Generalized Inverted Index and should be considered as a genie, not a drink.
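
A minimal sketch of a GIN index for full-text search (table and column names are hypothetical):

    CREATE INDEX ON docs USING gin(to_tsvector('english', body));
    SELECT * FROM docs
    WHERE to_tsvector('english', body) @@ to_tsquery('english', 'genie');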

Indexes in PostgreSQL — 6 (SP-GiST)

Reading time 11 min
Views 8.3K
We've already discussed the PostgreSQL indexing engine, the interface of access methods, and three methods: hash index, B-tree, and GiST. In this article, we will describe SP-GiST.

SP-GiST


First, a few words about the name. The «GiST» part alludes to some similarity with the access method of the same name. The similarity does exist: both are generalized search trees that provide a framework for building various access methods.

«SP» stands for space partitioning. The space here is often just what we are used to calling a space, for example, a two-dimensional plane. But we will see that any search space is meant, that is, actually any value domain.

SP-GiST is suitable for structures where the space can be recursively split into non-intersecting areas. This class comprises quadtrees, k-dimensional trees (k-D trees), and radix trees.
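
A sketch for the two-dimensional case (the table is hypothetical; for the point type, SP-GiST uses the quadtree operator class by default):

    CREATE TABLE points(p point);
    CREATE INDEX ON points USING spgist(p);                -- quad_point_ops by default
    SELECT * FROM points WHERE p <@ box '(0,0),(10,10)';   -- points inside a box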
