
In this article, we’ll explore what vector search is, what problems it solves, and how the pgpro_vector extension for Postgres Pro brings powerful vector capabilities directly into a relational database — no need for separate specialized systems.
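To give a feel for the idea, here is a small sketch of vector similarity search in SQL. It deliberately uses the open-source pgvector syntax (the vector type, the <-> distance operator, an hnsw index) purely as an illustration; pgpro_vector's actual API is covered in the article and may differ.

```sql
-- Illustration only: pgvector-style syntax; pgpro_vector's API may differ.
CREATE TABLE documents (
    id        bigserial PRIMARY KEY,
    title     text,
    embedding vector(3)   -- 3 dimensions only to keep the example readable
);

INSERT INTO documents (title, embedding) VALUES
    ('about cats', '[0.9, 0.1, 0.0]'),
    ('about dogs', '[0.8, 0.2, 0.1]'),
    ('about SQL',  '[0.0, 0.1, 0.9]');

-- Approximate nearest-neighbour index on the embedding column
CREATE INDEX ON documents USING hnsw (embedding vector_l2_ops);

-- The two rows whose embeddings are closest to the query vector
SELECT title
FROM documents
ORDER BY embedding <-> '[0.85, 0.15, 0.05]'
LIMIT 2;
```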
Today, I want to talk about one of those sneaky tricks that can help speed up query execution. Specifically, this is about reordering conditions in WHERE clauses, JOINs, HAVING clauses, and so on.
The idea is simple: if a condition in an AND chain turns out to be false, or if one in an OR chain turns out to be true, there's no need to evaluate the rest. That means saved CPU cycles — and sometimes, a lot of them. Let’s break this down.
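As a quick illustration (the orders table and the expensive_check() function below are made up for the example), placing the cheap, highly selective condition first lets the executor skip the costly one for most rows:

```sql
-- orders and expensive_check() are hypothetical, purely for illustration.
-- In an AND chain, evaluating the cheap, selective condition first means
-- expensive_check() only runs for rows that pass the first test.
SELECT *
FROM orders
WHERE status = 'active'            -- cheap and selective: check first
  AND expensive_check(payload);    -- costly: reached only for surviving rows

-- The same idea for OR: once a cheap condition is true for a row,
-- the remaining conditions need not be evaluated at all.
SELECT *
FROM orders
WHERE status = 'cancelled'
   OR expensive_check(payload);
```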
Here I describe the results of developing a PostgreSQL extension I built just out of curiosity. Its purpose is to automatically manage extended column statistics. The idea came to me while finishing work on another "smart" query-driven product for improving PostgreSQL planning quality. I realized that the current architecture of PostgreSQL isn’t quite ready for fully autonomous operation — automatic detection of bad plans and adaptive optimizer tuning. So why not try the other way around and build an autonomous data-driven assistant?
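For context, "extended column statistics" here means PostgreSQL's CREATE STATISTICS objects, which normally have to be created by hand. A minimal sketch of that manual step, with a made-up table, looks like this:

```sql
-- Hypothetical table: city and zip_code are strongly correlated, which
-- per-column statistics cannot express.
CREATE TABLE addresses (
    id       bigserial PRIMARY KEY,
    city     text,
    zip_code text
);

-- Extended statistics on the column pair let the planner estimate their
-- combined selectivity; creating and maintaining such objects by hand is
-- exactly the step the extension described here tries to automate.
CREATE STATISTICS addresses_city_zip (dependencies, ndistinct)
    ON city, zip_code FROM addresses;

ANALYZE addresses;
```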
How to compare the efficiency of SQL query plans? “Measure the execution time, of course!” — an experienced reader would say. And they would be absolutely right: from a practical perspective, the more efficient DBMS is the one that delivers higher TPS. However, sometimes we need to design a system that doesn't exist yet or predict behavior under loads that haven't occurred yet. In such cases, we need a characteristic that allows us to perform a qualitative analysis of a plan or compare two plans. This post is dedicated to one such characteristic — the number of data pages read.
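In stock PostgreSQL, one way to see how many pages a plan touches is the BUFFERS option of EXPLAIN. A minimal example (the orders table is made up):

```sql
-- BUFFERS reports the shared, local and temp blocks hit and read by every
-- plan node, which makes it possible to compare plans by the amount of
-- data they touch rather than only by elapsed time.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)
FROM orders
WHERE created_at >= now() - interval '1 day';
```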
For years, we’ve studied Oracle to make PostgreSQL a more migration-friendly option. We introduced counterparts to Oracle's SQL Profile and SQL Plan Baseline in the form of the AQO and sr_plan extensions. In some cases, PostgreSQL even outperforms Oracle, especially in automatic re-optimization.
Migrations from Oracle to PostgreSQL are usually smooth performance-wise, and we’ve even developed session variable extensions to make the transition easier. While many enterprise-only features exist, PostgreSQL often integrates popular solutions directly into the core.
Hi everyone, I’m Alexey. I’m a big fan of observability, and in this post, I want to share something I’ve been working on — the pgpro-otel-collector.
TL;DR: pgpro-otel-collector is an OpenTelemetry collector (aka monitoring agent) tailored for gathering Postgres metrics and logs — brought to you by PostgresPro.
If you've ever run multiple instances of PostgreSQL or other software on a single machine (whether virtual or physical), you've probably encountered the "noisy neighbor" effect — instances disrupting each other's performance. So, how do you make them get along? We’ve got the answer!
If you work with PostgreSQL, you've likely run into performance issues at some point — especially as your database grows. Things may have been running smoothly at first, but as your client database expanded, queries started slowing down. Sound familiar? Here's a guide to help you identify and fix problematic queries, so you can get your PostgreSQL database running at peak performance again.
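One common starting point for this kind of analysis is the pg_stat_statements extension. For example, the most time-consuming statements can be listed like this (column names as of PostgreSQL 13 and later, where total_time was split into planning and execution time):

```sql
-- Requires the pg_stat_statements extension to be installed and enabled.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 statements by total execution time
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 1)  AS mean_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```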
Imagine a familiar situation: it’s Monday morning, tasks are piling up, and you need to quickly spin up a new service using Postgres Pro. Or maybe you’ve just upgraded your database server over the weekend — added more CPUs, more RAM.
Here’s how to get your database tuned and ready to make the most of the new hardware and workload, without wasting time.
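As a rough sketch of what such tuning can look like at the SQL level, here are a few settings one might adjust after a hardware upgrade. The values are illustrative starting points for roughly a 64 GB / 16-core server, not recommendations from the article:

```sql
-- Illustrative values only; the right numbers depend on your workload.
ALTER SYSTEM SET shared_buffers = '16GB';        -- often around 25% of RAM
ALTER SYSTEM SET effective_cache_size = '48GB';  -- estimate of memory available for caching
ALTER SYSTEM SET max_worker_processes = 16;
ALTER SYSTEM SET max_parallel_workers = 16;
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;

-- shared_buffers and max_worker_processes require a restart;
-- the parallelism and cache-size settings take effect on reload.
SELECT pg_reload_conf();
```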
DBAs often struggle to identify the most resource-hungry processes that degrade system performance. Back in 2017, DBA — and now Postgres Professional engineer — Andrey Zubkov faced the same challenge. This led him to develop pg_profile for PostgreSQL, which has since evolved into pgpro_pwr.
In this article, we’ll dive into strategic database monitoring and show you how to pinpoint bottlenecks in your databases using our tools.
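To give a flavour of the workflow, and assuming pg_profile is installed into a schema named profile as in its documentation, sampling and reporting look roughly like this (pgpro_pwr follows the same model, though object names may differ):

```sql
-- Take a statistics sample; in practice this is scheduled, e.g. every 30 minutes
SELECT profile.take_sample();

-- List collected samples and build an HTML report between two of them
SELECT * FROM profile.show_samples();
SELECT profile.get_report(1, 2);   -- report covering samples 1 through 2
```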
According to Gartner, natural language queries will replace SQL as early as 2026.
While Gartner's prediction may be optimistic, the shift toward natural language interfaces for databases is inevitable. The timeline may vary, but the transition itself is a certainty.
Storing all your data in one place might seem convenient, but it’s often impractical. High costs, database scalability limits, and complex administration create major hurdles. That’s why smart businesses rely on Information Lifecycle Management (ILM) — a structured approach that automates data management based on policies and best practices.
With Postgres Pro Enterprise 17, ILM is now easier than ever, thanks to the pgpro_ilm extension. This tool enables seamless data tiering, much like Oracle's ILM functionality. Let’s dive into the challenges of managing large databases, how ILM solves them, and how you can implement it in Postgres Pro Enterprise 17.
While pg_probackup 3 is still in the works and not yet available to the public, let’s dive into what’s new under the hood. There’s a lot to unpack — from a completely reimagined application architecture to long-awaited features and seamless integration with other tools.
For many years, the PostgreSQL community was skeptical about using this database management system (DBMS) for high-transaction environments. While PostgreSQL worked well for lab tests, mid-tier web applications, and smaller backend systems, it was believed that for heavy transactional loads, you’d need an expensive DBMS designed specifically for such purposes. As a result, PostgreSQL wasn’t particularly developed in that direction, leaving a range of issues unanswered.
However, the reality has turned out differently. More and more of our clients are encountering problems that stem from this mindset. For example, in the global PostgreSQL community, it’s considered that 64 cores is the maximum size of a server where PostgreSQL can run effectively. But we’re now seeing that this is becoming a minimum typical configuration. One particular bottleneck that has emerged is the transaction counter, and this is a far more interesting issue. So, let’s dive into what the problem is, how we solved it, and what the international community thinks about it.
Postgres Pro Enterprise 17 introduces major improvements in performance and scalability. The key feature of this new release is the proxima extension, which combines connection pooling, proxying, and load balancing within the database core. Developers also gain improved tools for managing message queues, optimizing queries, enhancing security, and utilizing smart data storage. Want to know how these and other features can impact your applications and simplify database administration?
This article provides a brief overview of the release, accompanied by links to more detailed information.
Statistically, September CommitFests feature the fewest commits. Apparently, the version 18 CommitFest is an outlier. There are many accepted patches and many interesting new features to talk about.
If you missed the July CommitFest, get up to speed here: 2024-07.
This article is the first in the series about the upcoming PostgreSQL 18 release. Let us take a look at the features introduced in the July CommitFest.
Planner: Hash Right Semi Join support
Planner: materializing an internal row set for parallel nested loop join
Planner support functions for generate_series
EXPLAIN (analyze): statistics for Parallel Bitmap Heap Scan node workers
Functions min and max for composite types
Parameter names for regexp* functions
Debug mode in pgbench
pg_get_backend_memory_contexts: column path instead of parent, new column type
Function pg_get_acl
pg_upgrade: pg_dump optimization
Predefined role pg_signal_autovacuum_worker
With PostgreSQL 17 RC1 out, we are on the home stretch toward the official PostgreSQL release, scheduled for September 26, 2024.
Let's take a look at the patches that came in during the March CommitFest.