Development

Postgres Pro Enterprise 18: built-in memory cache and new high‑availability options

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 3K

Asynchronous I/O, ML-based query plan optimization, and built-in connection pooling are among the key features of the new Postgres Pro Enterprise 18. This release brings together the capabilities of the vanilla PostgreSQL 18 core with Enterprise-grade tools for working with large-scale data. Today we will walk through the technical details, new index scanning strategies, and mechanisms for scaling write workloads.

Read more

BlueVein: How I spent a month to avoid wasting 56 hours a year reconnecting Bluetooth devices in dual-boot

Level of difficulty: Medium
Reading time: 5 min
Reach and readers: 3.5K

Do you switch between Linux and Windows in dual-boot? Then you're probably familiar with this problem: you have to reconnect all your Bluetooth devices every time. Headphones, mouse, keyboard, gamepad — everything has to be reconnected.

It's scary to even think about it:
3 devices × 90 seconds × 3 switches per day × 250 days = 56 hours wasted per year.
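The estimate holds up; here is the same arithmetic as a quick sketch, with the numbers taken straight from the assumptions above:

```python
# Back-of-the-envelope estimate of time lost to Bluetooth re-pairing,
# using the article's own assumptions.
devices = 3              # headphones, mouse, keyboard / gamepad
seconds_per_device = 90  # time to re-pair one device
switches_per_day = 3     # dual-boot switches per day
work_days = 250          # working days per year

hours_lost = devices * seconds_per_device * switches_per_day * work_days / 3600
print(f"{hours_lost:.2f} hours per year")  # 56.25 hours per year
```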

I spent a month solving this problem and wrote BlueVein — a utility for automatically synchronizing Bluetooth keys between operating systems.

Read more

Activation Function Stress Test: GELU vs Tanh

Reading time: 8 min
Reach and readers: 3.1K

In modern neural networks, including Transformer-based LLMs, unbounded activation functions such as ReLU and GELU have become the standard. Their main advantages are good gradient flow and fast training of deep models.

In practice, however, a problem shows up: when dominant patterns or high-frequency noise appear in the input context (long dialogues, noisy data, repetitive or dominant tokens), models become unstable and prone to generation degradation and hallucinations.
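The bounded-versus-unbounded contrast at the heart of this comparison is easy to see numerically. A minimal sketch using the exact erf-based GELU, with no ML framework assumed:

```python
import math

def gelu(x: float) -> float:
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# tanh saturates at +/-1, while GELU grows without bound for large inputs.
for x in (1.0, 5.0, 50.0):
    print(f"x={x:5.1f}  tanh={math.tanh(x):.4f}  gelu={gelu(x):.4f}")
```

For x = 50 the tanh output is pinned at 1.0 while GELU returns roughly 50: exactly the property a stress test with dominant inputs probes.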

In this article, I attempted to find out if the choice of activation function could be fundamentally linked to LLM hallucinations.

Read more

Weight Decay Deep Dive: How Regularization Locks In Old Knowledge Instead of Erasing It

Level of difficulty: Easy
Reading time: 10 min
Reach and readers: 2K

In my previous article, I noted some interesting behavior regarding Weight Decay; here, I examine it in detail.

It is generally accepted in the ML industry that if we take a pre-trained model and fine-tune it on a new task, the old weights are gradually overwritten. Furthermore, if we add Weight Decay (L2 regularization), the process of "forgetting" superfluous information should theoretically happen even faster.
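For reference, here are the textbook mechanics being questioned: with L2 regularization, every SGD step shrinks each weight toward zero on top of the task gradient. A minimal sketch (hyperparameter values are illustrative):

```python
def sgd_step(w: float, grad: float, lr: float = 0.1, wd: float = 0.01) -> float:
    # L2 regularization adds wd * w to the gradient, so each step
    # multiplies the weight by (1 - lr * wd) before the task update.
    return w - lr * (grad + wd * w)

w = 1.0
for _ in range(100):
    w = sgd_step(w, grad=0.0)  # no task gradient: pure decay
print(w)  # ~0.905: a weight receiving no gradient signal slowly shrinks
```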

I tested this claim experimentally. The results were counter-intuitive: under specific settings, Weight Decay works in the exact opposite way—it protects the old structure from destruction.

Below is a description of the experiment and conclusions for those involved in model training and AI safety.

Read more

Claude Code with Ollama: No Cloud, No Limits

Level of difficulty: Easy
Reading time: 2 min
Reach and readers: 4.4K

In January 2026, Ollama added support for the Anthropic Messages API, enabling Claude Code to connect directly to any Ollama model. This tutorial explains how to install Claude Code, pull and run local models using Ollama, and configure your environment for a seamless local coding experience.
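A minimal setup sketch under the article's premise. Claude Code honors the `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` environment variables, and Ollama listens on localhost:11434 by default; the model name below is purely illustrative, and the exact endpoint path may differ in your Ollama version:

```shell
# Install Claude Code and pull a local model (model name is an example)
npm install -g @anthropic-ai/claude-code
ollama pull qwen3

# Point Claude Code at the local Ollama endpoint instead of the Anthropic cloud
export ANTHROPIC_BASE_URL=http://localhost:11434
export ANTHROPIC_AUTH_TOKEN=ollama   # placeholder; no real key needed locally
claude --model qwen3
```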

Read more

Subliminal Learning and Structural Inertia: Why Neural Networks Remember What They Should Forget

Level of difficulty: Easy
Reading time: 20 min
Reach and readers: 4.2K

In my previous article, I explored the phenomenon of subliminal learning, but it raised more questions than answers. It is time to dive deeper. Below, you will find the experiments and the code.

In the fields of AI Alignment and LLM Security, a critical question remains: does fine-tuning or Reinforcement Learning from Human Feedback (RLHF) guarantee the removal of unwanted information?

Spoiler: The experiments demonstrated that the well-known Mode Connectivity effect makes the complete erasure of pre-training information practically impossible during standard fine-tuning. Structural Imprinting persists in the weight topology and can be read through a subliminal channel. Even with full weight unfreezing and aggressive L2 regularization (active forgetting), the latent space topology formed during the pre-training stage persists and determines the solution to the new task with an accuracy of 88–99%.

Read more

PostgreSQL for WMS: a DBMS selection strategy in the era of import substitution

Level of difficulty: Medium
Reading time: 9 min
Reach and readers: 6.4K

Today we want to talk about choosing a DBMS for a WMS, not as a dry technical exercise, but as a strategic decision that determines the security, budget, and future flexibility of your business. This is not about "why PostgreSQL is technically better," but about why it has become the only safe, cost-effective, and future-proof choice for Russian warehouse systems in the new reality.

This is not just another database article. It is a roadmap for those who do not want to wake up one day to a paralyzed warehouse and multi-million fines caused by a bad decision made yesterday. At INTEKEY we have walked this path deliberately, and today our WMS projects for the largest market players run on PostgreSQL. We know from experience where the pitfalls are and how to avoid them.

Read more

Session Teleportation in Claude Code

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 4.6K

Recently, I started using Session Teleportation in Claude Code. It allows you to move an entire conversation, including context, history, and the working branch, between the web and your local terminal.

In this tutorial, I show you how it works and how to use it to make your workflow seamless.

Read more

Less routine, more control: PPEM gets smarter

Reading time: 3 min
Reach and readers: 7.8K

Bulk config rollouts, built-in OpenTelemetry, and two-click HA cluster control are all part of one goal: making PostgreSQL admin simpler and safer. PPEM 2.3 is a big step toward that — with user-defined presets, a reworked alerting system, and stronger RBAC — helping you bring order to messy configs and trust the system to warn you before things go sideways.

Read more

Codex Skills Deep Dive: Progressive Disclosure, Triggers, and Best Practices

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 5.8K

If you are using the Codex CLI and find yourself writing the same instructions over and over again, you are not using the tool to its full potential. Codex offers a powerful feature called Skills that lets you package reusable workflows and give your AI agent new capabilities on demand. This article covers how Skills work and how to get the most out of them.

Read more

How to speed up mass data inserts in PostgreSQL when using Spring

Level of difficulty: Hard
Reading time: 17 min
Reach and readers: 8.6K

A common task in enterprise systems is to load large volumes of data into PostgreSQL — sometimes tens or even hundreds of millions of rows. At first glance, this seems simple: just write a loop in Java and call save() for every record. But in reality, such an approach can be painfully slow. Even a perfectly tuned PostgreSQL instance won’t help if the application is sending data inefficiently.

This article explains how to significantly accelerate bulk inserts when working with PostgreSQL through Spring and Hibernate. We’ll walk through which Spring and Hibernate settings are worth enabling, why they matter, and how much performance they can actually unlock. We’ll also look at how to build your own data-insertion layer for PostgreSQL — one that lets you switch between different insertion strategies, leverage PostgreSQL’s custom capabilities, and parallelize the process. Finally, we’ll see how to integrate this layer with Spring and what real gains each approach can deliver.
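For context, the usual first step in this direction is enabling JDBC batching on both the Hibernate side and the driver side. A sketch of the relevant Spring Boot properties (the values are illustrative, not tuned recommendations):

```properties
# Hibernate: group INSERTs into JDBC batches instead of one round-trip per row
spring.jpa.properties.hibernate.jdbc.batch_size=1000
spring.jpa.properties.hibernate.order_inserts=true

# PostgreSQL JDBC driver: rewrite a batch into multi-row INSERT statements
spring.datasource.url=jdbc:postgresql://localhost:5432/mydb?reWriteBatchedInserts=true
```

One caveat worth knowing up front: Hibernate silently disables insert batching for entities with IDENTITY-generated keys, so the ID generation strategy matters as much as the settings.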

Read more

Planting commits in Siberia: Postgres Pro opens in Akademgorodok

Level of difficulty: Easy
Reading time: 5 min
Reach and readers: 7K

Some IT companies say they support open source. In practice, that often boils down to using other people’s code and a bit of PR. We believe real contribution means commits to the core. And to do that consistently, we opened an engineering center not in a glossy capital business park, but in a place where fundamental science is part of the cultural DNA. Here’s why we’re building the future of systems programming in Novosibirsk Akademgorodok.

Read more

File handling in PostgreSQL: barriers and ways around them

Level of difficulty: Medium
Reading time: 9 min
Reach and readers: 9.9K

Hitting the 4-billion-row limit in a TOAST table or running into an OidGen lock during a massive document import is a PostgreSQL admin’s nightmare. Sure, architects will tell you to push files to S3 — but real life often means keeping them inside the database. In this post, application optimization lead Alexander Popov breaks down how the standard bytea and pg_largeobject mechanisms work, where their bottlenecks hide, and how Postgres Pro Enterprise helps you get around those limits.

Read more

Delivering Faster Analytics at Pinterest

Level of difficulty: Medium
Reading time: 6 min
Reach and readers: 6.9K

Pinterest is a visual discovery platform where people find ideas like recipes, home and style inspiration, and much more. The platform offers its partners shopping capabilities as well as a significant advertising opportunity, with 500+ million monthly active users. Advertisers can purchase ads directly on Pinterest or through partnerships with advertising agencies. At our scale, analytics data lets advertisers see how their Pins perform and how Pinterest users interact with them, informing decisions that help their ads perform better on our platform.

Read more

Guide to AI Coding Agents & Assistants: How to Choose the Right One

Level of difficulty: Easy
Reading time: 7 min
Reach and readers: 8.8K

There are now so many AI tools for coding that it can be confusing to know which one to pick. Some act as simple helpers (Assistant), while others can do the work for you (Agent). This guide breaks down the top AI coding tools that you should be aware of. We will look at what they do, who they are for, and how much they cost.

Read more

Top 24 Free Neural Networks & AI Services for Every Occasion

Level of difficulty: Easy
Reading time: 9 min
Reach and readers: 8K

2025. Algorithms have seamlessly integrated into our lives—from work to education, creativity, and daily routines. They edit texts, select fonts, generate ideas, assist with coding, compose music, and more. Frankly speaking, the only thing they can’t do yet is brew your coffee. Although... that might just be a matter of time.

Just two years ago, we were amazed by neural networks hesitantly manipulating objects in photos. Who could have predicted back then that Will Smith’s spaghetti feast would mark the beginning of such a revolution?

With new opportunities come fresh challenges. How do you navigate this vast landscape? What tools are truly effective? Which ones fit your needs best? Where can you avoid paying, registering, or deciphering complex interfaces?

We’ve compiled a list of reliable and user-friendly neural networks ready for immediate use without unnecessary hassles. The services are categorized neatly: text generation, image creation, video production, music composition, presentations, and much more. Each category showcases three top-rated options!

Yes, many services offer paid subscriptions. But today, we're focusing solely on what works freely, no credit card required!

Read more

Breaking data for fun

Level of difficulty: Easy
Reading time: 8 min
Reach and readers: 6.5K

Throughout their careers, engineers build systems that protect data and guard it against corruption. But what if the right approach is the opposite: deliberately corrupting data, generating it out of thin air, and creating forgeries indistinguishable from the real thing?

Maksim Gramin, systems analyst at Postgres Professional, explains why creating fake data is a critical skill for testing, security, and development — and how to do it properly without turning your database into a junkyard of “John Smith” entries.

Read more