
Tech Administration

Cray: Resurrection

Level of difficulty: Medium
Reading time: 12 min
Reach and readers: 3.6K

There are things in the IT industry whose very existence has become a beautiful myth.

The knowledge described in this article is extremely rare: it was once held only by people with academic degrees, specialized training, and, most importantly, access to the necessary equipment.


CRM, Regulatory Constraints, and Automation: How We Engineered a Reliable Release Process

Level of difficulty: Medium
Reading time: 14 min
Reach and readers: 3.6K

How we transformed stressful manual releases into a dependable, one-click process using GitOps and automation: 50+ modules, auditors, and regulators, all covered by a single template that scaled across more than 30 services. No magic, just engineering discipline.


A VPS server for the price of a bag of chips: a review of the cheapest plans from Russian hosting providers

Level of difficulty: Easy
Reading time: 7 min
Reach and readers: 2.2K

Hello, Habr! I once ran a small benchmark of virtual machines from various hosting providers and compared them with each other; it turns out five years have passed since then. In that test the conditions were the same for all servers, since similar configurations were being compared.

Today I'd like to talk about how the cheapest offers from popular hosting providers (in the 100 to 300 ruble range) behave.


Index page pruning in PostgreSQL

Level of difficulty: Easy
Reading time: 11 min
Reach and readers: 562

Page pruning (HOT cleanup) is an optimization that efficiently removes old row versions (tuples) from table blocks; the freed space is reused for new row versions. Only the space occupied by row versions beyond the database's xmin horizon can be reclaimed. If the xmin horizon is held back, for example by a long-running query or transaction, neither page pruning nor VACUUM can reclaim space, forcing new row versions to be inserted into other blocks. This article examines the algorithm behind a similar optimization for indexes. Using the standard pgbench test, we demonstrate how significantly performance can degrade when the database horizon is held back, and analyze the underlying causes.
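The horizon effect described above can be sketched with a toy model: a block holds several versions of a row, and pruning may reclaim only versions that died before the xmin horizon. This is an illustration of the concept only, not PostgreSQL's actual pruning code.

```python
# Toy model of page pruning: a block holds row versions, and pruning may
# reclaim only versions that are dead beyond the database's xmin horizon.

def prune(block, xmin_horizon):
    """Drop row versions whose deleting transaction is older than the horizon."""
    kept = [t for t in block if t["xmax"] is None or t["xmax"] >= xmin_horizon]
    reclaimed = len(block) - len(kept)
    return kept, reclaimed

# Three versions of one row: two dead (superseded by updates), one live.
versions = [
    {"xmin": 100, "xmax": 105},   # dead since transaction 105
    {"xmin": 105, "xmax": 110},   # dead since transaction 110
    {"xmin": 110, "xmax": None},  # live version
]

# With an advanced horizon, both dead versions can be reclaimed...
_, freed_now = prune(versions, 120)

# ...but a long-running transaction pinning the horizon at 107 keeps the
# newer dead version around, so less space is freed.
_, freed_held = prune(versions, 107)

assert freed_now == 2 and freed_held == 1
```

The second call shows exactly the failure mode the article benchmarks: a pinned horizon leaves dead versions in place, so new versions spill into other blocks.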


A brief overview of XHTTP for VLESS: what, why, and how

Level of difficulty: Medium
Reading time: 6 min
Reach and readers: 4.5K

We were asked to talk about the XHTTP transport technology in the context of XRay, VLESS, and the like. You asked for it, so here it is!

First, a bit of history. The classic use of VLESS and similar proxy protocols (including with XTLS-Reality) involves the client connecting directly to a proxy server running on some VPS. However, in many countries (including Russia), entire subnets of popular hosting providers have started being blocked (or throttled), and in other countries censors have begun monitoring connections to 'single' addresses with high traffic volumes. For a long time, therefore, the idea of connecting to proxy servers through CDNs (Content Delivery Networks) has been considered and tested. Most often the WebSocket transport was used for this, but that option has two major drawbacks: first, it has a characteristic fingerprint (which I won't describe here, so as not to make RKN's job easier); second, not that many CDNs support WebSocket proxying, and it would be desirable to proxy through those that don't.

This is why the meek transport was first invented in the well-known Tor project for its bridges: it transmits data as numerous HTTP request-response pairs, which allows connecting to bridges (proxies) through any CDN. A little later the same transport was implemented in the briefly resurrected V2Ray. But meek has two very significant drawbacks that stem from its operating principle: speed is very low (transmission is effectively half-duplex, with huge overhead from the constant requests and responses), and because of the huge number of GET/POST requests every second, free CDNs can quickly kick you out, while paid ones can present a hefty bill.
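The half-duplex overhead can be made concrete with a back-of-the-envelope model: chop a stream into sequential HTTP request/response pairs, each paying header overhead plus a full round trip. All numbers below (chunk size, header size, RTT) are hypothetical, chosen only to build intuition.

```python
# Rough cost model of meek-style tunneling: each chunk of the payload is
# carried by one sequential HTTP request/response pair, so the tunnel pays
# header overhead and a full round trip per chunk. Numbers are hypothetical.

def meek_cost(payload_bytes, chunk=4096, header_overhead=600, rtt_ms=80):
    round_trips = -(-payload_bytes // chunk)  # ceiling division
    bytes_on_wire = payload_bytes + round_trips * header_overhead
    latency_ms = round_trips * rtt_ms  # half-duplex: requests are sequential
    return round_trips, bytes_on_wire, latency_ms

# Tunneling 1 MiB in 4 KiB chunks takes 256 sequential round trips,
# i.e. roughly 20 seconds of accumulated latency at an 80 ms RTT.
trips, on_wire, latency = meek_cost(1024 * 1024)
assert trips == 256
```

Both drawbacks fall out of the model: throughput is capped by the request rate, and hundreds of requests per megabyte are exactly what gets noticed by CDN billing and abuse limits.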


Installing the NFQWS network packet modification program on a Keenetic router

Level of difficulty: Medium
Reading time: 13 min
Reach and readers: 2.5K

Hello, Habr!

Today we'll look at how to install Zapret, a network packet modification utility, on Keenetic routers. Unlike running it on individual devices, installing it on the router lets you process traffic from every device connected to your home network (PCs, smartphones, and smart TVs).


BlueVein: How I spent a month to avoid wasting 56 hours a year reconnecting Bluetooth devices in dual-boot

Level of difficulty: Medium
Reading time: 5 min
Reach and readers: 6.5K

Do you switch between Linux and Windows in dual-boot? Then you're probably familiar with this problem: you have to reconnect all your Bluetooth devices every time. Headphones, mouse, keyboard, gamepad — everything has to be reconnected.

It's scary to even think about it:
3 devices × 90 seconds × 3 switches per day × 250 days = 56 hours wasted per year.
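The estimate above checks out; spelling the arithmetic out makes the unit conversion explicit (seconds per year divided by 3600):

```python
# Sanity-checking the back-of-the-envelope estimate from the article.
devices = 3
seconds_per_device = 90
switches_per_day = 3
work_days = 250

wasted_seconds = devices * seconds_per_device * switches_per_day * work_days
wasted_hours = wasted_seconds / 3600
assert wasted_hours == 56.25  # matches the "56 hours per year" quoted
```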

I spent a month solving this problem and wrote BlueVein — a utility for automatically synchronizing Bluetooth keys between operating systems.


Breaking data for fun

Level of difficulty: Easy
Reading time: 8 min
Reach and readers: 6.9K

Throughout their careers, engineers build systems that protect data and guard it against corruption. But what if the right approach is the opposite: deliberately corrupting data, generating it out of thin air, and creating forgeries indistinguishable from the real thing?

Maksim Gramin, systems analyst at Postgres Professional, explains why creating fake data is a critical skill for testing, security, and development — and how to do it properly without turning your database into a junkyard of “John Smith” entries.


OAuth 2.0 authorization in PostgreSQL using Keycloak as an example

Level of difficulty: Easy
Reading time: 27 min
Reach and readers: 11K

Hello, Habr! We continue the series of articles on the new features of the Tantor Postgres 17.5.0 DBMS, and today we will talk about support for authorization via the OAuth 2.0 Device Authorization Flow. This modern, secure access method lets applications request access to PostgreSQL on behalf of a user through an external identity and access management provider such as Keycloak, which is especially convenient for cloud environments and microservice architectures (the feature will also be available in PostgreSQL 18). In this article we'll walk step by step through setting up OAuth authorization in PostgreSQL with Keycloak: configuring Keycloak, preparing PostgreSQL, writing an OAuth token validator for PostgreSQL, and verifying successful authorization via psql using the Device Flow.
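The Device Flow's shape is simple: the client obtains a user code, the user approves it out of band, and the client polls the token endpoint until approval. A minimal sketch of that polling loop, with the Keycloak endpoints mocked out (a real client would call them over HTTPS; all names and values here are invented):

```python
# Minimal sketch of the OAuth 2.0 Device Authorization Flow polling loop.
# Endpoints are mocked; real code would call Keycloak's device-authorization
# and token endpoints over HTTPS and sleep between polls.

def mock_device_authorize():
    return {"device_code": "dev-123", "user_code": "ABCD-EFGH",
            "verification_uri": "https://keycloak.example/device",
            "interval": 5}

_attempts = {"n": 0}

def mock_token_endpoint(device_code):
    # Simulate the user approving the device on the third poll.
    _attempts["n"] += 1
    if _attempts["n"] < 3:
        return {"error": "authorization_pending"}
    return {"access_token": "token-xyz", "token_type": "Bearer"}

def device_flow():
    grant = mock_device_authorize()
    print(f"Visit {grant['verification_uri']} and enter {grant['user_code']}")
    while True:
        resp = mock_token_endpoint(grant["device_code"])
        if resp.get("error") == "authorization_pending":
            continue  # real code would sleep grant["interval"] seconds here
        return resp["access_token"]

token = device_flow()
assert token == "token-xyz"
```

The same loop is what psql effectively performs during Device Flow login: print the verification URI, poll, and proceed once the token arrives.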


Quitting the Samurai Path: How EXANTE Is Changing Its Infrastructure, or How We Failed at Going Cloud Native

Level of difficulty: Easy
Reading time: 5 min
Reach and readers: 21K

From hype to strategy: how EXANTE redefined Cloud Native after painful Kubernetes mistakes, the lessons learned, and how it is building a more resilient infrastructure.


The Russian trace in the history of the PostgreSQL logo

Level of difficulty: Easy
Reading time: 7 min
Reach and readers: 24K

The story of the PostgreSQL logo was shared by Oleg Bartunov, CEO of Postgres Professional, who personally witnessed these events and preserved an archive of correspondence and visual design development for the database system.

Our iconic PostgreSQL logo — our beloved “Slonik” — has come a long way. Soon, it will turn thirty! Over the years, its story has gathered plenty of myths and speculation. As a veteran of the community, I decided it’s time to set the record straight, relying on the memories of those who were there. Who actually came up with it? Why an elephant? How did it end up in a diamond, and how did the Russian word “slonik” become a part of the global IT vocabulary?


How to load test a PostgreSQL database and not miss anything

Level of difficulty: Medium
Reading time: 14 min
Reach and readers: 16K

During load testing of Tantor Postgres or other PostgreSQL-based databases with the standard pgbench tool, specialists often get non-representative results and have to rerun tests because details of the environment (such as DBMS configuration, server characteristics, and PostgreSQL version) were not recorded. In this article we review the author's pg_perfbench tool, which is designed to address this issue. It makes scenarios repeatable, prevents the loss of important data, and streamlines result comparison by registering all parameters in a single template. It also automatically launches pgbench with TPC-B load generation, collects all metadata about the testing environment, and generates a structured report.
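The "single template" idea boils down to capturing everything that affects reproducibility in one structured record next to the benchmark results. A hedged sketch of that shape (field names are illustrative, not pg_perfbench's actual schema):

```python
# Sketch of recording the test environment alongside pgbench results, so a
# run can be reproduced and compared later. Field names are illustrative.

import platform

def collect_environment(db_version, db_config):
    return {
        "host": {"os": platform.system(), "machine": platform.machine()},
        "dbms": {"version": db_version, "config": db_config},
    }

report = {
    "environment": collect_environment("PostgreSQL 16", {"shared_buffers": "4GB"}),
    "benchmark": {"tool": "pgbench", "workload": "TPC-B"},
}
assert report["environment"]["dbms"]["config"]["shared_buffers"] == "4GB"
```

Comparing two runs then becomes a structural diff of two such reports rather than archaeology through shell history.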


My approach to full system backups without external software: incremental rsync plus btrfs with zstd compression

Level of difficulty: Medium
Reading time: 3 min
Reach and readers: 9.1K

The repo of this script is https://gitlab.com/vitaly-zdanevich/full-backup/-/blob/master/full-backup.sh

Incremental with hard links means that if a file has not changed, the next backup will link to the same underlying data, much like deduplication. Hard links are just regular files.
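The space saving is easy to see directly: two directory entries pointing at the same inode cost one copy of the data. A small sketch (the `--link-dest` mention is my assumption about how such rsync-based schemes typically achieve this):

```python
# Demonstrating why hard-link incremental backups cost no extra space for
# unchanged files: both names point at the same inode, so the data exists
# once on disk no matter how many backups reference it.

import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "backup-1.txt")
    linked = os.path.join(d, "backup-2.txt")
    with open(original, "w") as f:
        f.write("unchanged file contents")
    os.link(original, linked)  # the effect rsync's --link-dest produces

    a, b = os.stat(original), os.stat(linked)
    assert a.st_ino == b.st_ino   # same underlying data
    assert a.st_nlink == 2        # two names, one copy on disk
```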

Also, this script takes the .gitignore of every folder into account.

Run this script from another system.


We’ve learned how to migrate databases from Oracle to Postgres Pro at 41 TB/day

Level of difficulty: Easy
Reading time: 3 min
Reach and readers: 9K

41 TB/day from Oracle to Postgres Pro without stopping the source system — not theory, but numbers from our latest tests. We broke the migration into three stages: fast initial load, CDC from redo logs, and validation, and wrapped them into ProGate. In this article, we’ll explain how the pipeline works, why we chose Go, and where the bottlenecks hide.


Getting started with pgpro-otel-collector

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 11K

Now that pgpro-otel-collector has had its public release, I’m excited to start sharing more about the tool — and to kick things off, I’m launching a blog series focused entirely on the Collector.

The first post is an intro — a practical guide to installing, configuring, and launching the collector. We’ll also take our first look at what kind of data the collector exposes, starting with good old Postgres metrics.


Getting to know PPEM 2

Level of difficulty: Easy
Reading time: 7 min
Reach and readers: 7.4K

Postgres Pro recently announced the release of Enterprise Manager 2, commonly known as PPEM.

In short, PPEM is an administration tool designed for managing and monitoring Postgres databases. Its primary goal is to assist DBAs in their daily tasks and automate routine operations. In this article, I'll take a closer look at what PPEM has to offer. My name is Alexey, and I'm part of the PPEM development team.


n8n Local Install Tutorial (CLI + Docker)

Level of difficulty: Easy
Reading time: 3 min
Reach and readers: 22K

n8n is a powerful, extendable workflow automation tool that allows you to connect different applications and services. Running it on your local machine, whether Windows, Mac, or Linux, gives you complete control over your data and workflows. This tutorial covers the two primary methods for local installation: using Docker and using Node.js (npm). If you are interested, read this article to the end. :)


Redundant statistics slow down your Postgres? Try sampling in pg_stat_statements

Level of difficulty: Medium
Reading time: 11 min
Reach and readers: 4.9K

pg_stat_statements is the standard PostgreSQL extension used to track query statistics: number of executions, total and average execution time, number of returned rows, and other metrics. This information allows you to analyze query behavior over time, identify problem areas, and make informed optimization decisions. However, on systems with high contention, pg_stat_statements itself can become a bottleneck and cause performance drops. In this article we will analyze the scenarios in which the extension becomes a source of problems, how sampling works, and the cases where applying it can reduce overhead.
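The sampling trade-off can be sketched with a toy collector: record only a fraction p of executions and scale counters up by 1/p, so the shared (contended) statistics structure is touched far less often. This is a model of the idea, not the extension's implementation:

```python
# Toy model of sampled statistics collection: record a query execution with
# probability p and estimate true counts as recorded / p. Fewer recordings
# mean fewer updates to the contended shared stats structure.

import random

def run_workload(n_queries, sample_rate, rng):
    recorded = 0
    for _ in range(n_queries):
        if rng.random() < sample_rate:
            recorded += 1  # only sampled executions touch the stats
    estimated_total = recorded / sample_rate
    return recorded, estimated_total

rng = random.Random(42)  # fixed seed for reproducibility
recorded, estimate = run_workload(100_000, 0.05, rng)

assert recorded < 6_000                          # ~5% of executions recorded
assert abs(estimate - 100_000) / 100_000 < 0.05  # estimate within 5%
```

The accuracy loss shrinks with query volume, which is exactly why sampling suits the high-throughput systems where the contention problem appears in the first place.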
