
SQL

Domain-specific language used in programming and designed for managing data held in a relational database management system, or for stream processing in a relational data stream management system


MVCC in PostgreSQL-4. Snapshots

Reading time: 9 min · Views: 6.6K
Having discussed isolation problems and made a digression into the low-level data structure, last time we explored row versions and observed how different operations change tuple header fields.

Now we will look at how consistent data snapshots are obtained from tuples.

What is a data snapshot?


Data pages can physically contain several versions of the same row. But each transaction must see only one (or none) version of each row, so that all of them make up a consistent picture of the data (in the sense of ACID) as of a certain point in time.

Isolation in PostgreSQL is based on snapshots: each transaction works with its own data snapshot, which «contains» data that were committed before the moment the snapshot was created and does not «contain» data that were not yet committed by that moment. We've already seen that although the resulting isolation appears stricter than required by the standard, it still has anomalies.
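
A quick psql illustration of the idea (a minimal sketch; the table t is assumed to exist): in Repeatable Read, the snapshot is taken at the first statement of the transaction and stays the same until the end.

BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT txid_current_snapshot();  -- e.g. 608:612:610 — xmin:xmax:list of in-progress transactions
SELECT count(*) FROM t;          -- sees only rows committed before the snapshot was taken
-- changes committed by other transactions from now on remain invisible here
COMMIT;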

MVCC in PostgreSQL-3. Row Versions

Reading time: 13 min · Views: 8.5K
Well, we've already discussed isolation and made a digression regarding the low-level data structure. And we've finally reached the most fascinating thing, that is, row versions (tuples).

Tuple header


As already mentioned, several versions of each row can be simultaneously available in the database. And we need to somehow distinguish one version from another one. To this end, each version is labeled with its effective «time» (xmin) and expiration «time» (xmax). Quotation marks denote that a special incrementing counter is used rather than the time itself. And this counter is the transaction identifier.

(As usual, in reality this is more complicated: the transaction ID cannot grow indefinitely because of the limited bit width of the counter. But we will explore the details of this when our discussion reaches freezing.)
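
These fields can be peeked at directly, since xmin and xmax are available as system columns of every table (the table and the numbers below are just an example):

CREATE TABLE accounts(id integer, amount numeric);
INSERT INTO accounts VALUES (1, 100.00);
SELECT xmin, xmax, * FROM accounts;
-- sample output:
--  xmin | xmax | id | amount
--  3817 |    0 |  1 | 100.00    (xmax = 0: this row version has not expired)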

MVCC in PostgreSQL-2. Forks, files, pages

Reading time: 11 min · Views: 5.7K
Last time we talked about data consistency, looked at the difference between levels of transaction isolation from the point of view of the user and figured out why this is important to know. Now we are starting to explore how PostgreSQL implements snapshot isolation and multiversion concurrency.

In this article, we will look at how data is physically laid out in files and pages. This takes us away from discussing isolation, but such a digression is necessary to understand what follows. We will need to figure out how the data storage is organized at a low level.

Relations


If you look inside tables and indexes, it turns out that they are organized in a similar way. Both are database objects that contain some data consisting of rows.

There is no doubt that a table consists of rows, but this is less obvious for an index. However, imagine a B-tree: it consists of nodes that contain indexed values and references to other nodes or table rows. It's these nodes that can be considered index rows, and in fact, they are.

Actually, a few more objects are organized in a similar way: sequences (essentially single-row tables) and materialized views (essentially, tables that remember the query). And there are also regular views, which do not store data themselves, but are in all other senses similar to tables.

In PostgreSQL, all these objects are referred to by the common word relation. The word is quite unfortunate because it is a term from relational theory. You can draw a parallel between a relation and a table (or a view), but certainly not between a relation and an index. Yet that is how it happened: the academic origin of PostgreSQL makes itself felt. It seems to me that tables and views were named first, and the rest accrued over time.
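
All these relations can be seen in the system catalog; a small illustration (the object names are made up for the example):

SELECT relname, relkind FROM pg_class
WHERE relname IN ('accounts', 'accounts_pkey');
-- relkind: r = table, i = index, S = sequence, v = view, m = materialized view
SELECT pg_relation_filepath('accounts');   -- the file behind the main fork of the relation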

Quintet instead of Byte — data storage and retrieval approach

Reading time: 13 min · Views: 1.7K
A quintet is a way to represent atomic pieces of data along with their role in the business domain. Quintets can describe any item, and each of them contains complete information about itself and its relations to other quintets. Such a description does not depend on the platform used. Its objective is to simplify data storage and make its presentation clearer.



We will discuss an approach to storing and processing information and share some thoughts on creating a development platform in this new paradigm. What for? To develop faster and in shorter iterations: sketch your project, make sure it is what you thought of, refine it, and then keep refining the result.

A quintet has the following properties: type, value, parent, and order among its peers. Together with the identifier, that makes 5 components. This is the simplest universal form for recording information, a new standard that could potentially fit any programming demands. Quintets are stored in a file system of unified structure, as a continuous, homogeneous, indexed bulk of data. The quintet data model describes any data structure as a single interconnected list of basic types and the terms based on them (metadata), together with instances of objects stored according to this metadata (data).
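
Purely for illustration, the five components listed above could be sketched as a relational structure like this (my own sketch, not the authors' storage format, which is a custom file structure):

CREATE TABLE quintet (
    id     bigint PRIMARY KEY,
    type   bigint REFERENCES quintet(id),   -- the role of the value in the business domain
    value  text,
    parent bigint REFERENCES quintet(id),   -- the quintet this one belongs to
    ord    integer                          -- order among the peers
);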

MVCC in PostgreSQL-1. Isolation

Reading time: 24 min · Views: 12K
Hello, Habr! With this article I start a set of series (or a series of sets? — In a word, the idea is grandiose) about the internal structure of PostgreSQL.

The material will be based on training courses (in Russian) on administration that Pavel pluzanov and I are creating. Not everyone likes to watch video (I definitely do not), and reading slides, even with comments, is no good at all.

Unfortunately, the only course available in English at the moment is 2-Day Introduction to PostgreSQL 11.

Of course, the articles will not be exactly the same as the content of the courses. I will talk only about how everything is organized, omitting the administration itself, but I will try to do it in more detail and more thoroughly. And I believe that knowledge like this is as useful to an application developer as it is to an administrator.

I will target those who already have some experience with PostgreSQL and at least a general understanding of what is what. The text will be too difficult for beginners. For example, I will not say a word about how to install PostgreSQL and run psql.

The material in question does not vary much from version to version, but I will use the current vanilla PostgreSQL 11.

The first series deals with issues related to isolation and multiversion concurrency, and the plan of the series is as follows:

  1. Isolation as understood by the standard and PostgreSQL (this article).
  2. Forks, files, pages — what is happening at the physical level.
  3. Row versions, virtual transactions and subtransactions.
  4. Data snapshots and the visibility of row versions; the event horizon.
  5. In-page vacuum and HOT updates.
  6. Normal vacuum.
  7. Autovacuum.
  8. Transaction id wraparound and freezing.

Off we go!

And before we start, I would like to thank Elena Indrupskaya for translating the articles to English.


How I prepared for and passed the Oracle Database SQL Certification (1Z0-071)

Reading time: 9 min · Views: 21K

Why did I write this article?



When I was preparing for the Java 8 OCA and OCP, I found a lot of useful articles on Habr about these subjects that helped me choose the optimal path and save a considerable amount of time.


When I started preparing for the OCA Oracle Database SQL (1Z0-071) exam, I didn’t find any materials on Habr about it and found only limited information available on the Internet. Because of this, I decided to write a complete guide to help others who are interested in this certification save time and successfully pass what I consider to be a pretty hard exam.


How to receive data from Google Analytics using R in Microsoft SQL Server

Reading time: 9 min · Views: 3.2K

In this article I want to show in detail how you can use R in Microsoft SQL Server to get data from Google Analytics (and generally from any API).


The task: we have an MS SQL Server and we want to receive data into the DWH via an API


We will use googleAnalyticsR package to connect to Google Analytics (GA).


This package was chosen as an example due to its popularity. You can use another package, for example RGoogleAnalytic.
The approach to solving the problem will be the same.
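
Roughly, the call looks like this (a hedged sketch: it assumes Machine Learning Services with R is enabled on the server and googleAnalyticsR is installed for its R runtime; authentication and the actual query are omitted):

EXEC sp_execute_external_script
    @language = N'R',
    @script   = N'
        library(googleAnalyticsR)
        # ga_auth(...) and google_analytics(...) calls would go here;
        # whatever ends up in OutputDataSet is returned to SQL Server as a result set
        OutputDataSet <- data.frame(sessions = integer(0))
    '
WITH RESULT SETS ((sessions int));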


Testing SQL Server code with tSQLt

Reading time: 20 min · Views: 2.4K
FYI: this article is an expanded version of my talk at SQA Days #25.

Based on my experience with colleagues, I can state that testing DB code is not a widespread practice. This can be potentially dangerous. DB logic is written by human beings just like all other «usual» code, so there can be failures that cause negative consequences for a product, business, or users. Whether it is stored procedures supporting a backend or ETL modifying data in a warehouse, there is always a risk, and testing helps to reduce it. I want to tell you what tSQLt is and how it helps us test DB code.
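
To give a feel for it, here is a minimal sketch of a tSQLt test (the function dbo.GetDiscount is an invented example; tSQLt is assumed to be installed in the database):

EXEC tSQLt.NewTestClass 'PricingTests';
GO
CREATE PROCEDURE PricingTests.[test discount for 100 is 10]
AS
BEGIN
    DECLARE @actual numeric(10,2) = dbo.GetDiscount(100.00);
    EXEC tSQLt.AssertEquals @Expected = 10.00, @Actual = @actual;
END;
GO
EXEC tSQLt.Run 'PricingTests';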


SQL Index Manager – a long story about SQL Server, grave digging and index maintenance

Reading time: 14 min · Views: 2.7K
Every now and then we create our own problems with our own hands… with our vision of the world… with our inaction… with our laziness… and with our fears. As a result, it seems very convenient to swim in the public flow of sewage patterns… because it is warm and fun, and the rest does not matter – you get used to the smell. But after a failure comes the realization of a simple truth – instead of generating an endless stream of reasons, self-pity and self-justification, it is enough just to do what you consider most important for yourself. This will be the starting point for your new reality.

For me, what is written below is just such a starting point. The way ahead promises to be long…
Let's go?

Indexes in PostgreSQL — 10 (Bloom)

Reading time: 11 min · Views: 7.3K
In the previous articles we discussed PostgreSQL indexing engine and the interface of access methods, as well as hash indexes, B-trees, GiST, SP-GiST, GIN, RUM, and BRIN. But we still need to look at Bloom indexes.

Bloom


General concept


A classical Bloom filter is a data structure that enables us to quickly check membership of an element in a set. The filter is highly compact, but allows false positives: it can mistakenly consider an element to be a member of a set (false positive), but it is not permitted to consider an element of a set not to be a member (false negative).

The filter is an array of $m$ bits (also called a signature) that is initially filled with zeros. $k$ different hash functions are chosen that map any element of the set to $k$ bits of the signature. To add an element to the set, we need to set each of these bits in the signature to one. Consequently, if all the bits corresponding to an element are set to one, the element can be a member of the set, but if at least one bit equals zero, the element is not in the set for sure.

In the case of a DBMS, we actually have $N$ separate filters built for each index row. As a rule, several fields are included in the index, and it's values of these fields that compose the set of elements for each row.

By choosing the length of the signature $m$, we can find a trade-off between the index size and the probability of false positives. The application area for Bloom indexes is large, considerably «wide» tables that are queried with filters on any of the fields. This access method, like BRIN, can be regarded as an accelerator of sequential scan: all the matches found by the index must be rechecked against the table, but there is a chance to avoid considering most of the rows at all.
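
A hedged example of what this looks like in practice (the table and the signature parameters are illustrative):

CREATE EXTENSION bloom;
CREATE TABLE wide(a integer, b integer, c integer, d integer);
CREATE INDEX wide_bloom ON wide USING bloom(a, b, c, d)
    WITH (length = 80, col1 = 2, col2 = 2, col3 = 2, col4 = 2);
-- the same index can serve equality filters on any combination of the columns:
EXPLAIN SELECT * FROM wide WHERE b = 1 AND d = 2;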

Indexes in PostgreSQL — 9 (BRIN)

Reading time: 18 min · Views: 9.1K
In the previous articles we discussed PostgreSQL indexing engine, the interface of access methods, and the following methods: hash indexes, B-trees, GiST, SP-GiST, GIN, and RUM. The topic of this article is BRIN indexes.

BRIN


General concept


Unlike indexes with which we've already got acquainted, the idea of BRIN is to avoid looking through definitely unsuited rows rather than quickly find the matching ones. This is always an inaccurate index: it does not contain TIDs of table rows at all.

Simplistically, BRIN works fine for columns whose values correlate with their physical location in the table. In other words, a query without an ORDER BY clause returns the column values virtually in increasing or decreasing order (and there are no indexes on that column).

This access method was created within the scope of Axle, the European project for extremely large analytical databases, with an eye on tables that are several terabytes or even dozens of terabytes large. An important feature of BRIN that enables us to create indexes on such tables is its small size and minimal maintenance overhead.

This works as follows. The table is split into ranges that are several pages large (or several blocks large, which is the same thing) — hence the name: Block Range Index, BRIN. The index stores summary information on the data in each range. As a rule, these are the minimum and maximum values, but sometimes it is different, as shown further on. Assume that a query contains a condition on a column: if the sought values do not fall into the interval, the whole range can be skipped; but if they do, all rows in all blocks of the range will have to be looked through to choose the matching ones.

It will not be a mistake to treat BRIN not as an index, but as an accelerator of sequential scan. We can regard BRIN as an alternative to partitioning if we consider each range as a «virtual» partition.
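
A minimal sketch of how this looks (table and column names are made up; 128 pages per range is the default):

CREATE TABLE events(created_at timestamptz, payload text);
-- rows are appended in chronological order, so created_at correlates with physical location
CREATE INDEX events_brin ON events USING brin(created_at) WITH (pages_per_range = 128);
EXPLAIN SELECT * FROM events WHERE created_at >= now() - interval '1 day';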

Now let's discuss the structure of the index in more detail.

Indexes in PostgreSQL — 8 (RUM)

Reading time: 11 min · Views: 8.8K
We have already discussed PostgreSQL indexing engine, the interface of access methods, and the main access methods: hash indexes, B-trees, GiST, SP-GiST, and GIN. In this article, we will watch how gin turns into rum.

RUM


Although the authors claim that gin is a powerful genie, the theme of drinks has eventually won: next-generation GIN has been called RUM.

This access method expands the concept that underlies GIN and enables us to perform full-text search even faster. In this series of articles, it is the only method that is not included in the standard PostgreSQL distribution and is an external extension. Several installation options are available for it:

  • Take «yum» or «apt» package from the PGDG repository. For example, if you installed PostgreSQL from «postgresql-10» package, also install «postgresql-10-rum».
  • Build from source code on github and install on your own (the instruction is there as well).
  • Use as a part of Postgres Pro Enterprise (or at least read the documentation from there).

Limitations of GIN


What limitations of GIN does RUM enable us to transcend?

First, the «tsvector» data type contains not only lexemes, but also information on their positions inside the document. As we observed last time, a GIN index does not store this information. For this reason, phrase search operations, which appeared in version 9.6, are supported by a GIN index inefficiently and have to access the original data for rechecking.

Second, search systems usually return the results sorted by relevance (whatever that means). We can use ranking functions «ts_rank» and «ts_rank_cd» to this end, but they have to be computed for each row of the result, which is certainly slow.

To a first approximation, RUM access method can be considered as GIN that additionally stores position information and can return the results in a needed order (like GiST can return nearest neighbors). Let's move step by step.
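
A hedged sketch of what this gives us (it assumes the rum extension is installed; table and column names are illustrative): the «<=>» distance operator lets the index return rows already ordered by relevance.

CREATE EXTENSION rum;
CREATE TABLE docs(id serial PRIMARY KEY, tsv tsvector);
CREATE INDEX docs_rum ON docs USING rum(tsv rum_tsvector_ops);
SELECT id FROM docs
WHERE tsv @@ to_tsquery('english', 'postgres & index')
ORDER BY tsv <=> to_tsquery('english', 'postgres & index')   -- index-supported ranking
LIMIT 10;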

Indexes in PostgreSQL — 7 (GIN)

Reading time: 18 min · Views: 25K
We have already got acquainted with PostgreSQL indexing engine and the interface of access methods and discussed hash indexes, B-trees, as well as GiST and SP-GiST indexes. And this article will feature GIN index.

GIN


«Gin?.. Gin is, it seems, such an American liquor?..»
«I'm not a drink, oh, inquisitive boy!» again the old man flared up, again he realized himself and again took himself in hand. «I am not a drink, but a powerful and undaunted spirit, and there is no such magic in the world that I would not be able to do.»

— Lazar Lagin, «Old Khottabych».

Gin stands for Generalized Inverted Index and should be considered as a genie, not a drink.

Indexes in PostgreSQL — 6 (SP-GiST)

Reading time: 11 min · Views: 9K
We've already discussed PostgreSQL indexing engine, the interface of access methods, and three methods: hash index, B-tree, and GiST. In this article, we will describe SP-GiST.

SP-GiST


First, a few words about this name. The «GiST» part alludes to some similarity with the same-name access method. The similarity does exist: both are generalized search trees that provide a framework for building various access methods.

«SP» stands for space partitioning. The space here is often just what we are used to calling a space, for example, a two-dimensional plane. But we will see that any search space is meant, that is, actually any value domain.

SP-GiST is suitable for structures where the space can be recursively split into non-intersecting areas. This class comprises quadtrees, k-dimensional trees (k-D trees), and radix trees.
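
For example, a quadtree over points (a minimal sketch; the names are made up):

CREATE TABLE places(id serial PRIMARY KEY, pos point);
CREATE INDEX places_spgist ON places USING spgist(pos);          -- quad_point_ops is the default opclass
EXPLAIN SELECT * FROM places WHERE pos <@ box '(0,0),(10,10)';   -- «contained in box»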

Indexes in PostgreSQL — 5 (GiST)

Reading time: 22 min · Views: 10K
In the previous articles, we discussed PostgreSQL indexing engine, the interface of access methods, and two access methods: hash index and B-tree. In this article, we will describe GiST indexes.

GiST


GiST is an abbreviation of «generalized search tree». This is a balanced search tree, just like «b-tree» discussed earlier.

What is the difference? A «btree» index is strictly tied to comparison semantics: support of the «greater», «less», and «equal» operators is all it is capable of (but it is very capable of that!). However, modern databases also store data types for which these operators just make no sense: geodata, text documents, images, …

GiST index method comes to our aid for these data types. It permits defining a rule to distribute data of an arbitrary type across a balanced tree and a method to use this representation for access by some operator. For example, GiST index can «accommodate» R-tree for spatial data with support of relative position operators (located on the left, on the right, contains, etc.) or RD-tree for sets with support of intersection or inclusion operators.

Thanks to extensibility, a totally new access method can be created from scratch in PostgreSQL: to this end, an interface with the indexing engine must be implemented. But this requires thinking through not only the indexing logic, but also mapping data structures to pages, efficient implementation of locks, and support for the write-ahead log. All this assumes high developer skills and a lot of human effort. GiST simplifies the task by taking over the low-level problems and offering its own interface: several functions pertaining not to low-level techniques, but to the application domain. In this sense, we can regard GiST as a framework for building new access methods.
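
A minimal sketch of the spatial case mentioned above (the names are made up):

CREATE TABLE areas(id serial PRIMARY KEY, bbox box);
CREATE INDEX areas_gist ON areas USING gist(bbox);
-- relative-position operators, e.g. overlap (&&), can use the index:
EXPLAIN SELECT * FROM areas WHERE bbox && box '(0,0),(5,5)';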

Indexes in PostgreSQL — 4 (Btree)

Reading time: 17 min · Views: 22K
We've already discussed PostgreSQL indexing engine and the interface of access methods, as well as the hash index, one of the access methods. We will now consider B-tree, the most traditional and widely used index. This article is large, so be patient.

Btree


Structure


The B-tree index type, implemented as the «btree» access method, is suitable for data that can be sorted. In other words, the «greater», «greater or equal», «less», «less or equal», and «equal» operators must be defined for the data type. Note that the same data can sometimes be sorted in different ways, which brings us back to the concept of operator families.
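
A minimal example (the names are made up); btree is the default access method, so it does not even need to be spelled out:

CREATE TABLE accounts(id integer, amount numeric);
CREATE INDEX accounts_amount_idx ON accounts(amount);   -- USING btree is implied
EXPLAIN SELECT * FROM accounts WHERE amount <= 100.00 ORDER BY amount;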

Indexes in PostgreSQL — 3 (Hash)

Reading time: 7 min · Views: 18K
The first article described PostgreSQL indexing engine, the second one dealt with the interface of access methods, and now we are ready to discuss specific types of indexes. Let's start with hash index.

Hash


Structure


General theory


Plenty of modern programming languages include hash tables as the base data type. On the outside, a hash table looks like a regular array that is indexed with any data type (for example, string) rather than with an integer number. Hash index in PostgreSQL is structured in a similar way. How does this work?

As a rule, data types have very large ranges of permissible values: how many different strings can we potentially envisage in a column of type «text»? At the same time, how many different values are actually stored in a text column of some table? Usually, not so many of them.

The idea of hashing is to associate a small number (from 0 to N−1, N values in total) with a value of any data type. Association like this is called a hash function. The number obtained can be used as an index of a regular array where references to table rows (TIDs) will be stored. Elements of this array are called hash table buckets — one bucket can store several TIDs if the same indexed value appears in different rows.

The more uniformly a hash function distributes source values by buckets, the better it is. But even a good hash function will sometimes produce equal results for different source values — this is called a collision. So, one bucket can store TIDs corresponding to different keys, and therefore, TIDs obtained from the index need to be rechecked.
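
In SQL terms, this looks as follows (a minimal sketch; the names are made up):

CREATE TABLE accounts(id integer, client text);
CREATE INDEX accounts_client_hash ON accounts USING hash(client);
EXPLAIN SELECT * FROM accounts WHERE client = 'alice';   -- equality is the only operator a hash index supports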

Indexes in PostgreSQL — 2

Reading time: 5 min · Views: 8K

Interface


In the first article, we've mentioned that an access method must provide information about itself. Let's look into the structure of the access method interface.

Properties


All properties of access methods are stored in the «pg_am» table («am» stands for access method). We can also get a list of available methods from this same table:

postgres=# select amname from pg_am;
 amname
--------
 btree
 hash
 gist
 gin
 spgist
 brin
(6 rows)

Although sequential scan can rightfully be referred to as an access method, it is not on this list for historical reasons.

In PostgreSQL versions 9.5 and lower, each property was represented with a separate field of the «pg_am» table. Starting with version 9.6, properties are queried with special functions and are separated into several layers:

  • Access method properties — «pg_indexam_has_property»
  • Properties of a specific index — «pg_index_has_property»
  • Properties of individual columns of the index — «pg_index_column_has_property»

The access method layer and index layer are separated with an eye towards the future: as of now, all indexes based on one access method will always have the same properties.
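
For example, the access-method layer can be queried like this (a hedged sketch; the property names shown are only a subset of those defined):

SELECT p.name, pg_indexam_has_property(a.oid, p.name)
FROM pg_am a,
     unnest(array['can_order', 'can_unique', 'can_multi_col', 'can_exclude']) AS p(name)
WHERE a.amname = 'btree';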

Indexes in PostgreSQL — 1

Reading time: 13 min · Views: 25K

Introduction


This series of articles is largely concerned with indexes in PostgreSQL.

Any subject can be considered from different perspectives. We will discuss matters that should interest an application developer who uses a DBMS: what indexes are available, why there are so many different types of them, and how to use them to speed up queries. The topic could probably be covered in fewer words, but we secretly hope for a curious developer who is also interested in the details of the internals, especially since understanding such details allows you not only to defer to others' judgement, but also to draw conclusions of your own.

Development of new types of indexes is outside the scope. This requires knowledge of the C programming language and pertains to the expertise of a system programmer rather than an application developer. For the same reason we almost won't discuss programming interfaces, but will focus only on what matters for working with ready-to-use indexes.

In this article we will discuss the distribution of responsibilities between the general indexing engine related to the DBMS core and individual index access methods, which PostgreSQL enables us to add as extensions. In the next article we will discuss the interface of access methods and important concepts such as operator classes and operator families. After that long but necessary introduction we will consider details of the structure and application of different types of indexes: Hash, B-tree, GiST, SP-GiST, GIN and RUM, BRIN, and Bloom.

Before we start, I would like to thank Elena Indrupskaya for translating the articles to English.
Things have changed a bit since the original publication. My comments on the current state of affairs are indicated like this.
