  • Locks in PostgreSQL: 1. Relation-level locks

    • Translation
    The two previous series of articles covered isolation and multiversion concurrency control, and write-ahead logging.

    In this series, we will discuss locks.

    This series will consist of four articles:

    1. Relation-level locks (this article).
    2. Row-level locks.
    3. Locks on other objects and predicate locks.
    4. Locks in RAM.

    The material of all the articles is based on training courses on administration that Pavel Luzanov and I are creating (mostly in Russian, although one course is available in English), but does not repeat them verbatim and is intended for careful reading and self-experimenting.

    Many thanks to Elena Indrupskaya for the translation of these articles into English.



    General information on locks


    PostgreSQL has a wide variety of techniques that serve to lock something (or are at least called so). Therefore, I will first explain in the most general terms why locks are needed at all, what kinds are available and how they differ from one another. Then we will figure out which of this variety is used in PostgreSQL, and only after that will we start discussing different kinds of locks in detail.
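
    As a minimal sketch of the kind of experiment this series invites (the accounts table here is just an assumed example name), a relation-level lock can be taken explicitly and then observed in the pg_locks view:

    -- session 1: take an explicit relation-level lock and keep the transaction open
    BEGIN;
    LOCK TABLE accounts IN SHARE MODE;

    -- session 2: see the relation-level locks held on the table
    SELECT locktype, relation::regclass, mode, granted
    FROM pg_locks
    WHERE locktype = 'relation' AND relation = 'accounts'::regclass;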
    Read more →
  • WAL in PostgreSQL: 4. Setup and Tuning

    • Translation
    So, we got acquainted with the structure of the buffer cache and, in that context, concluded that if all the RAM contents were lost due to a failure, the write-ahead log (WAL) would be required for recovery. The size of the necessary WAL files and the recovery time are limited thanks to the checkpoint performed from time to time.

    In the previous articles we already reviewed quite a few important settings that relate to WAL in one way or another. In this article (the last in this series) we will discuss the WAL setup issues that remain unaddressed: WAL levels and their purpose, as well as the reliability and performance of write-ahead logging.

    WAL levels


    The main task of WAL is to ensure recovery after a failure. But since we have to maintain the log anyway, we can also adapt it to other tasks by adding some more information to it. There are several logging levels. The wal_level parameter specifies the level, and each next level includes everything that gets into WAL at the preceding level, plus something new.
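
    For instance, the current level can be inspected and raised roughly like this (a sketch, assuming superuser rights; note that a change of wal_level takes effect only after a server restart):

    -- check the current logging level
    SHOW wal_level;

    -- raise it, e.g., to allow logical decoding; takes effect after a restart
    ALTER SYSTEM SET wal_level = logical;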
    Read more →
  • WAL in PostgreSQL: 3. Checkpoint

    • Translation
    We already got acquainted with the structure of the buffer cache — one of the main objects of the shared memory — and concluded that to recover after failure when all the RAM contents get lost, the write-ahead log (WAL) must be maintained.

    The problem yet unaddressed, where we left off last time, is that we do not know where to start playing back WAL records during recovery. To begin at the beginning, as the King from Lewis Carroll's Alice advised, is not an option: it is impossible to keep all the WAL records from the server start — this potentially means both a huge amount of storage and an equally long recovery. We need a point that gradually moves forward and from which we can safely start the recovery (and, accordingly, remove all the earlier WAL records). And this is the checkpoint, to be discussed below.

    Checkpoint


    What features must the checkpoint have? We must be sure that all the WAL records starting with the checkpoint will be applied to pages that have already been flushed to disk. If this were not the case, during recovery we could read from disk a version of a page that is too old, apply the WAL record to it and, by doing so, irreversibly corrupt the data.
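
    As a quick illustration of the concepts involved (a sketch, assuming sufficient privileges), a checkpoint can be forced manually and the location of the latest one inspected:

    -- force a checkpoint right now
    CHECKPOINT;

    -- the latest checkpoint and its REDO point, where recovery would start
    SELECT checkpoint_lsn, redo_lsn, checkpoint_time
    FROM pg_control_checkpoint();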
    Read more →
  • WAL in PostgreSQL: 1. Buffer Cache

    • Translation
    The previous series addressed isolation and multiversion concurrency control, and now we start a new series: on write-ahead logging. To remind you, the material is based on training courses on administration that Pavel Luzanov and I are creating (mostly in Russian, although one course is available in English), but does not repeat them verbatim and is intended for careful reading and self-experimenting.

    This series will consist of four parts:

    1. Buffer cache (this article).
    2. Write-ahead log.
    3. Checkpoint.
    4. Setup and tuning.

    Many thanks to Elena Indrupskaya for the translation of these articles into English.

    Why do we need write-ahead logging?


    Part of the data that a DBMS works with is stored in RAM and gets written to disk (or other nonvolatile storage) asynchronously, i.e., writes are postponed for some time. The less frequently this happens, the less input/output is needed and the faster the system operates.

    But what will happen in case of a failure, for example, a power outage or an error in the code of the DBMS or operating system? All the contents of RAM will be lost, and only data written to disk will survive (disks are not immune to certain failures either, and only a backup copy will help if data on disk are affected). In general, it is possible to organize input/output in such a way that data on disk are always kept consistent, but this is complicated and not very efficient (to my knowledge, only Firebird chose this option).

    Usually, and specifically in PostgreSQL, data written to disk appear to be inconsistent, and when recovering after a failure, special actions are required to restore data consistency. Write-ahead logging (WAL) is exactly the feature that makes this possible.
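
    As a small sketch of experimenting with this (the table t below is a hypothetical example used only for illustration), one can watch the current WAL insert position move forward as changes are made:

    -- current position in the write-ahead log
    SELECT pg_current_wal_lsn();

    -- any change is first recorded in WAL, so the position moves forward
    CREATE TABLE t(n integer);   -- hypothetical test table
    INSERT INTO t VALUES (1);
    SELECT pg_current_wal_lsn();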
    Read more →
  • MVCC in PostgreSQL-7. Autovacuum

    • Translation
    To remind you, we started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions.

    Then we explored in-page vacuum (and HOT updates) and vacuum. Now we'll look into autovacuum.

    Autovacuum


    We've already mentioned that normally (i.e., when nothing holds the transaction horizon back for a long time) VACUUM does its job. The problem is how often to call it.

    If we vacuum a changing table too rarely, its size will grow more than desired. Besides, the next vacuum operation may require several passes through the indexes if too many changes have accumulated.

    If we vacuum the table too often, the server will constantly do maintenance rather than useful work — and this is no good either.

    Note that launching VACUUM on a schedule by no means resolves the issue because the workload can change over time. If the table starts to change more intensively, it must be vacuumed more often.

    Autovacuum is exactly the technique that enables us to launch vacuuming depending on how intensively the table changes.
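
    As a sketch of how this is typically acted upon (the accounts table name is an assumption for illustration), autovacuum can be made more or less aggressive per table through storage parameters, and its activity can be checked in the statistics views:

    -- make autovacuum react to changes in this table sooner than the global defaults
    ALTER TABLE accounts SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_threshold    = 100
    );

    -- check when autovacuum last processed the table and how many times
    SELECT relname, last_autovacuum, autovacuum_count
    FROM pg_stat_user_tables
    WHERE relname = 'accounts';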
    Read more →
  • MVCC in PostgreSQL-8. Freezing

    • Translation
    We started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions.

    Then we covered different vacuuming techniques: in-page vacuum (along with HOT updates), vacuum and autovacuum.

    Now we've reached the last topic of this series. We will talk about transaction id wraparound and freezing.
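
    A minimal sketch of how the danger can be watched and averted (the accounts table is an assumed example name):

    -- how close tables are to the wraparound limit: the age of relfrozenxid
    SELECT relname, age(relfrozenxid)
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY age(relfrozenxid) DESC
    LIMIT 5;

    -- freeze all tuples of a table explicitly
    VACUUM (FREEZE) accounts;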
    Read more →
  • MVCC in PostgreSQL-6. Vacuum

    • Translation
    We started with problems related to isolation, made a digression about low-level data structure, then discussed row versions and observed how data snapshots are obtained from row versions.

    Last time we talked about HOT updates and in-page vacuuming, and today we'll proceed to the well-known vacuum vulgaris. Really, so much has already been written about it that I can hardly add anything new, but the beauty of a full picture requires some sacrifice. So bear with me.

    Vacuum


    What does vacuum do?


    In-page vacuum works fast, but frees only part of the space. It works within one table page and does not touch indexes.

    The basic, «normal» vacuum is done using the VACUUM command, and we will call it just «vacuum» (leaving «autovacuum» for a separate discussion).

    So, vacuum processes the entire table. It vacuums away not only dead tuples, but also references to them from all indexes.

    Vacuuming runs concurrently with other activity in the system. The table and indexes can be used in the regular way both for reads and updates (however, concurrent execution of commands such as CREATE INDEX, ALTER TABLE and some others is impossible).

    Only those table pages where some activity took place are scanned. To detect them, the visibility map is used (to remind you, the map tracks pages that contain only sufficiently old tuples, which are certainly visible in all data snapshots). Only the pages not tracked by the visibility map are processed, and the map itself gets updated in the process.

    The free space map also gets updated in the process to reflect the extra free space in the pages.
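
    A small sketch of experimenting with this (the accounts table and the optional pg_visibility extension are assumptions for illustration):

    -- vacuum one table and report what was done
    VACUUM (VERBOSE) accounts;

    -- dead and live tuple counts according to the statistics collector
    SELECT n_live_tup, n_dead_tup, last_vacuum
    FROM pg_stat_user_tables
    WHERE relname = 'accounts';

    -- pages marked all-visible in the visibility map (needs: CREATE EXTENSION pg_visibility)
    SELECT count(*) FILTER (WHERE all_visible) AS all_visible_pages,
           count(*) AS total_pages
    FROM pg_visibility_map('accounts');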
    Read more →