How to Set Up Your Own Private Self-Hosted VPN Server in 30 Minutes (2026)

Build a self-hosted VPN for full control, better privacy, and reliable access, even in regions with censorship and DPI-based blocking


There are things in the IT industry whose very existence has become a beautiful myth.
The knowledge described in this article is extremely rare: until now it has been held only by people with an academic degree, special training, and, most importantly, access to the necessary equipment.

How we transformed stressful manual releases into a dependable, one-click process using GitOps and automation: 50+ modules, auditors, and regulators, all handled by a single template that scaled across more than 30 services. No magic, just engineering discipline.

Hello, Habr! I once conducted a small test of virtual machines from various hosting providers and compared them with each other; it turns out that five years have passed since then. In that test the conditions were the same for all servers, since similar configurations were being tested.
Today I'd like to talk about how the cheapest offers (in the 100 to 300 ruble range) from popular hosting providers behave.

Page pruning (HOT cleanup) is an optimization that efficiently removes old row versions (tuples) from table blocks. The freed space is reused for new row versions. Only the space occupied by row versions beyond the database's xmin horizon is reclaimed. This article examines the algorithm behind a similar optimization for indexes. If the xmin horizon is held back - by a long-running query or transaction - neither page pruning nor VACUUM can reclaim space, forcing new row versions to be inserted into different blocks. Using the standard pgbench test, we demonstrate how significantly performance can degrade when the database horizon is held back, and we analyze the underlying causes.
We were asked to talk about the XHTTP transport technology in the context of XRay, VLESS, and others. You asked for it, so here it is!
First, a bit of history. The classic use of VLESS and similar proxy protocols (including with XTLS-Reality) involves the client connecting directly to a proxy server running on some VPS. However, in many countries (including Russia), entire subnets of popular hosting providers have started to be blocked (or throttled), and in other countries censors have begun to monitor connections to 'lone' addresses with high traffic volumes. Therefore, ideas of connecting to proxy servers through CDNs (Content Delivery Networks) have long been considered and tested. Most often the WebSocket transport was used for this, but this option has two major drawbacks: first, it has a characteristic signature (I won't specify it here so as not to make RKN's job easier), and second, the number of CDNs that support WebSocket proxying is not that large, and it would be desirable to be able to proxy through those that do not.
This is why the meek transport was first invented in the well-known Tor project for its bridges: it transmits data through numerous HTTP request-response pairs, which makes it possible to connect to bridges (proxies) through any CDN. A little later, the same transport was implemented in the briefly resurrected V2Ray. But meek has two very significant drawbacks that stem from its operating principle: the speed is very low (in effect we get half-duplex transmission and huge overhead from the constant request-response cycles), and because of the huge number of GET/POST requests every second, free CDNs can quickly kick us out, while paid ones can present a hefty bill.
Hello, Habr!
Today we'll look at how to install the network packet modification utility Zapret on Keenetic routers. Unlike installing it on individual devices, installing it on a router allows you to process traffic from all devices connected to your home local network (PCs, smartphones, and smart TVs).

Do you switch between Linux and Windows in dual-boot? Then you're probably familiar with this problem: you have to reconnect all your Bluetooth devices every time. Headphones, mouse, keyboard, gamepad — everything has to be reconnected.
It's scary to even think about it:
3 devices × 90 seconds × 3 switches per day × 250 days = 56 hours wasted per year.
I spent a month solving this problem and wrote BlueVein — a utility for automatically synchronizing Bluetooth keys between operating systems.
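The back-of-the-envelope estimate above is easy to verify. A trivial sketch (the figures are the article's own):

```python
# Time lost re-pairing Bluetooth devices after each dual-boot switch.
devices = 3             # headphones, mouse, keyboard
seconds_per_device = 90  # time to re-pair one device
switches_per_day = 3
work_days_per_year = 250

wasted_seconds = devices * seconds_per_device * switches_per_day * work_days_per_year
wasted_hours = wasted_seconds / 3600
print(round(wasted_hours, 2))  # prints 56.25
```

So "56 hours wasted per year" is, if anything, slightly rounded down.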

Throughout their careers engineers build systems that protect data and guard it against corruption. But what if the right approach is the opposite: deliberately corrupting data, generating it out of thin air, and creating forgeries indistinguishable from the real thing?
Maksim Gramin, systems analyst at Postgres Professional, explains why creating fake data is a critical skill for testing, security, and development — and how to do it properly without turning your database into a junkyard of “John Smith” entries.
Hello, Habr! I'd like to share my experience developing such a system.
The defining parameters of a domain-specific system are:

Looking for cheap VPS hosting that’s fast, reliable, and fits both personal and business projects? We’ve reviewed more than 20 trusted VPS and VDS providers and compared them by pricing, uptime, features, and support

Hello, Habr! We continue the series of articles on the innovations of the Tantor Postgres 17.5.0 DBMS, and today we will talk about authorization support via the OAuth 2.0 Device Authorization Flow, a modern and secure access method that allows applications to request access to PostgreSQL on behalf of the user through an external identification and access control provider, such as Keycloak. This is especially convenient for cloud environments and microservice architectures (the feature will also be available in PostgreSQL 18). In this article, we'll take a step-by-step look at configuring OAuth authorization in PostgreSQL using Keycloak: configure Keycloak, prepare PostgreSQL, write an OAuth token validator for PostgreSQL, and verify successful authorization via psql using the Device Flow.
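To give a flavor of the PostgreSQL side of such a setup, here is a hedged configuration sketch based on the OAuth support introduced in PostgreSQL 18 (the hostname, realm, and validator library name are illustrative placeholders, not values from the article):

```
# pg_hba.conf - delegate authentication to an OAuth validator
host  all  all  0.0.0.0/0  oauth  issuer="https://keycloak.example.com/realms/myrealm"  scope="openid"

# postgresql.conf - load the custom token validator module
oauth_validator_libraries = 'my_oauth_validator'
```

With this in place, psql can obtain a token through the Device Flow from the configured issuer, and the server passes it to the validator library for verification.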

From hype to strategy: how EXANTE redefined Cloud Native after painful Kubernetes mistakes, lessons learned, and building a more resilient infrastructure

The story of the PostgreSQL logo was shared by Oleg Bartunov, CEO of Postgres Professional, who personally witnessed these events and preserved an archive of correspondence and visual design development for the database system.
Our iconic PostgreSQL logo — our beloved “Slonik” — has come a long way. Soon, it will turn thirty! Over the years, its story has gathered plenty of myths and speculation. As a veteran of the community, I decided it’s time to set the record straight, relying on the memories of those who were there. Who actually came up with it? Why an elephant? How did it end up in a diamond, and how did the Russian word “slonik” become a part of the global IT vocabulary?

During load testing of Tantor Postgres databases or other PostgreSQL-based databases with the standard pgbench tool, specialists often encounter non-representative results and the need for repeated tests because details of the environment (such as DBMS configuration, server characteristics, and PostgreSQL version) are not recorded. In this article we review the author's pg_perfbench, which is designed to address this issue. It ensures that scenarios are repeatable, prevents the loss of important data, and streamlines result comparison by registering all parameters in a single template. It also automatically launches pgbench with TPC-B load generation, collects all metadata on the testing environment, and generates a structured report.
The repo of this script is https://gitlab.com/vitaly-zdanevich/full-backup/-/blob/master/full-backup.sh
Incremental with hard links means that if a file has not changed, the next backup will link to the same underlying data, similar to deduplication. Hard links are just regular files.
Also, this script skips files matched by the .gitignore of every folder.
Run this script from another system.
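A minimal illustration of why hard-linked backups cost no extra space for unchanged files (a standalone Python sketch, not part of the script itself; file names are made up):

```python
import os
import tempfile

# Simulate two backup generations sharing one unchanged file via a hard link.
d = tempfile.mkdtemp()
src = os.path.join(d, "backup1_file.txt")
with open(src, "w") as f:
    f.write("unchanged contents")

dst = os.path.join(d, "backup2_file.txt")
os.link(src, dst)  # second directory entry, same inode, no extra data blocks

s1, s2 = os.stat(src), os.stat(dst)
print(s1.st_ino == s2.st_ino)  # True: both names point at the same data
print(s1.st_nlink)             # 2: the inode now has two directory entries
```

Deleting one of the two names leaves the data intact, which is what lets old backup generations be dropped independently.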

41 TB/day from Oracle to Postgres Pro without stopping the source system — not theory, but numbers from our latest tests. We broke the migration into three stages: fast initial load, CDC from redo logs, and validation, and wrapped them into ProGate. In this article, we’ll explain how the pipeline works, why we chose Go, and where the bottlenecks hide.

Now that pgpro-otel-collector has had its public release, I’m excited to start sharing more about the tool — and to kick things off, I’m launching a blog series focused entirely on the Collector.
The first post is an intro — a practical guide to installing, configuring, and launching the collector. We’ll also take our first look at what kind of data the collector exposes, starting with good old Postgres metrics.

Postgres Pro recently announced the release of Enterprise Manager 2, commonly known as PPEM.
In short, PPEM is an administration tool designed for managing and monitoring Postgres databases. Its primary goal is to assist DBAs in their daily tasks and automate routine operations. In this article, I'll take a closer look at what PPEM has to offer. My name is Alexey, and I'm part of the PPEM development team.

n8n is a powerful, extendable workflow automation tool that allows you to connect different applications and services. Running it on your local machine gives you complete control over your data and workflows, and it works on Windows, macOS, and Linux. This tutorial covers the two primary methods for local installation: using Docker and using Node.js (npm). If you are interested, then read this article until the end. :)
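For the Docker route, a minimal compose file is usually enough to get a local instance running (a sketch; the image name and default port 5678 follow n8n's published Docker image, and the volume name is arbitrary):

```yaml
# docker-compose.yml - minimal local n8n
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"              # n8n's default web UI port
    volumes:
      - n8n_data:/home/node/.n8n # persist credentials and workflows across restarts
volumes:
  n8n_data:
```

After `docker compose up -d`, the editor is available at http://localhost:5678. The npm alternative is a single command, `npx n8n`, which requires a recent Node.js installation.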