Micro Frontends: A Developer Fad or a Real Business Benefit?

Micro frontends: hype or game-changer? How one team cut costs, sped releases, and built scalable apps without chaos.

Everyone knows great documentation makes or breaks a tech product — but few realize how much work goes into it. At Postgres Professional, the docs are written with the same discipline as the code. What’s even more impressive is that all of it is done by a team of just ten people. We talked to senior technical writer Ekaterina Gololobova to see how it really works — from the first task to the final commit.

One of the open challenges in the database world is keeping a database consistent across multiple DBMS instances (nodes) that independently handle client connections. The crux of the issue is ensuring that if one node fails, the others keep running smoothly — accepting connections, committing transactions, and maintaining consistency without a hitch. Think of it like a single DBMS instance staying operational despite a faulty RAM stick or intermittent access to multiple CPU cores.
My name is Andrey Lepikhov, and I’d like to kick off a discussion about the multi-master concept in PostgreSQL: its practical value, feasibility, and the tech stack needed to make it happen. By framing the problem more narrowly, we might find a solution that’s genuinely useful for the industry.

In an era dominated by high-level abstractions and a focus on rapid development, the C programming language seems like a relic to many — an "outdated" tool with manual memory management and "dangerous" pointers. But what if these are its greatest strengths?
Maxim Orlov, a programmer at Postgres Professional with 22 years of experience, argues that C is not about quick wins and fast prototypes, but about fundamental control and a deep, philosophical understanding of how computers work. Join us for a journey from an initial frustration with Pascal to a profound appreciation for C, and learn why this "bastion of calm" is more relevant than ever.

I’ve been using the Gemini CLI a lot lately for my coding projects. I really like how it helps me work faster right inside my terminal. But when I first started, I didn’t always get the best results. Over time, I’ve learned some simple tricks that make a huge difference. If you use the Gemini CLI, I want to share my top 10 pro tips. If you are ready, then let’s get started.

When I started working with AWS SageMaker, one of the most common questions was: “Which inference type should I choose for my model?” SageMaker offers four different options, and at first glance, the differences between them aren’t always obvious. Let’s break down when and which approach to use.

In the big data era, data is one of the most valuable assets an enterprise has. The ultimate goal of data analytics is to power swift, agile business decision-making. As database technologies have advanced at a breathtaking pace in recent years, a large number of excellent database systems have emerged. Some excel at wide-table queries but struggle with complex queries; others support flexible multi-table queries but are held back by slow query speed.
Each type of data has a data model that represents it best. In real business scenarios, however, there is no such thing as ultra-fast analytics under a perfect data model. Big data engineers sometimes have to compromise on data models, and such compromises either cause long latency in complex queries or hurt real-time performance, because engineers have to go to the trouble of flattening complex data models into flat tables.
New business requirements pose new challenges for database systems. A good OLAP database must deliver excellent performance in both wide-table and multi-table scenarios. It must also reduce the workload of big data engineers and let customers query data along any dimension in real time, without worrying about how the data is constructed.

Investing in Application Security (AppSec) and DevSecOps is no longer optional; it's a strategic imperative. However, securing budget and justifying these initiatives requires moving beyond fear and speaking the language of business: Return on Investment (ROI).
This guide provides a structured framework for calculating the costs and benefits of embedding security into your software development lifecycle (SDLC). By understanding and applying concepts like Total Cost of Ownership (TCO), Lifecycle Cost Analysis (LCCA), and Return on Security Investment (ROSI), you can build a compelling financial case, guide your security strategy, and prove tangible value to stakeholders.
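To make the arithmetic concrete, here is the widely used ROSI formula (in the form popularized by ENISA) with a worked example; the figures are invented purely to show the mechanics and are not benchmarks from the guide.

```
ROSI = (ALE × mitigation ratio − cost of solution) / cost of solution

Hypothetical example:
  ALE (annual loss expectancy)         = $400,000
  Mitigation ratio of the control      = 75%
  Annual cost of SAST tooling + triage = $120,000

  ROSI = (400,000 × 0.75 − 120,000) / 120,000 = 1.5
       → a net $1.50 of avoided loss per $1 invested
```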

Go client for Gotenberg — document conversion service supporting Chromium, LibreOffice, and PDF manipulation engines.
Features
- Chromium: Convert URLs, HTML, and Markdown to PDF
- LibreOffice: Convert Office documents (Word, Excel, PowerPoint) to PDF
- PDF Engines: Merge, split, and manipulate PDFs
- Webhook support: Async conversions with callback URLs
- Stream-first: Built on httpstream for efficient multipart uploads
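To make the feature list concrete, here is a rough sketch of what the client wraps: Gotenberg exposes plain multipart HTTP routes such as /forms/chromium/convert/url, so a URL-to-PDF conversion is a single POST. The snippet below uses only the standard library; the Gotenberg address and output filename are assumptions, and the client's own typed API will look different.

```go
// Sketch: convert a URL to PDF by calling Gotenberg's Chromium route directly.
// Assumes a Gotenberg instance listening on http://localhost:3000.
package main

import (
	"bytes"
	"io"
	"mime/multipart"
	"net/http"
	"os"
)

func main() {
	var body bytes.Buffer
	form := multipart.NewWriter(&body)
	form.WriteField("url", "https://example.com") // page to render
	form.Close()                                  // write the closing boundary

	resp, err := http.Post(
		"http://localhost:3000/forms/chromium/convert/url",
		form.FormDataContentType(),
		&body,
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("page.pdf")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	io.Copy(out, resp.Body) // the response body is the rendered PDF
}
```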

Stream-first HTTP Client for Go. Efficient, zero-buffer streaming for large HTTP payloads — built on top of net/http.
httpstream provides a minimal, streaming-oriented API for building HTTP requests without buffering entire payloads in memory. It is ideal for large JSON bodies, multipart uploads, generated archives, or continuous data feeds.
- Stream data directly via io.Pipe—no intermediate buffers
- Constant memory usage (O(1)), regardless of payload size
- Natural backpressure (writes block when receiver is slow)
- Thin net/http wrapper—fully compatible
- Middleware support: func(http.RoundTripper) http.RoundTripper
- Fluent API for readability (GET, POST, Multipart, etc.)
- No goroutine leaks, no globals
httpstream connects your writer directly to the HTTP transport. Data is transmitted as it's produced, allowing the server to start processing immediately—without waiting for the full body to be buffered.
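The underlying pattern is easy to show with nothing but the standard library: hand the read end of an io.Pipe to the request and produce the body in a goroutine, so data flows to the transport as it is written. The sketch below illustrates that idea with a placeholder endpoint and payload; httpstream wraps the same mechanics behind its fluent API.

```go
// Sketch of the io.Pipe pattern that httpstream builds on: the body is
// streamed to the server as it is produced, with no intermediate buffer.
// The endpoint and payload are placeholders.
package main

import (
	"encoding/json"
	"io"
	"net/http"
)

func main() {
	pr, pw := io.Pipe()

	// Produce the body in a goroutine. Writes block while the transport
	// (or the server) is slow to read, which gives natural backpressure.
	go func() {
		enc := json.NewEncoder(pw)
		var err error
		for i := 0; i < 1_000_000; i++ {
			if err = enc.Encode(map[string]int{"n": i}); err != nil {
				break
			}
		}
		pw.CloseWithError(err) // nil ends the body normally; an error aborts the request
	}()

	// The request reads directly from the pipe, so memory stays constant
	// regardless of how large the payload grows.
	resp, err := http.Post("https://example.com/ingest", "application/x-ndjson", pr)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
}
```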

In a previous article, I proposed the holographic hypothesis: an LLM isn't a database of facts, but an interference field—a landscape of probabilities shaped by billions of texts. But a static landscape is just potential. How does the model actually move through it? How does it choose one specific answer from infinite possibilities?
This is where the Narrative Engine comes in. If the holographic hypothesis describes the structure of an LLM's "mind," the narrative engine hypothesis describes its dynamics. It is the mechanism that drives the model, forcing its probabilistic calculations to follow the coherent pathways of stories. This article critiques modern prompting techniques through this new lens, arguing that we are not programming a machine, but initiating a narrative.

Apache Druid has long been a staple for real-time analytics. But as analytics demands have grown more sophisticated, it has struggled to keep up with modern data performance needs. Enter StarRocks, a high-performance, open-source analytical database designed to meet the advanced analytics needs of contemporary enterprises with robust capabilities and performance.
In this article, we’ll explore the functionalities, strengths, and challenges of both Apache Druid and StarRocks. Using practical examples and benchmark results, we aim to guide you in identifying which database might best meet your data needs.

Alright. I pose the same question to an LLM in various forms. And this statistical answer generator, this archive of human knowledge, provides responses that sometimes seem surprisingly novel, and other times, derivative and banal.
On Habr, you'll find arguments that an LLM is incapable of novelty and creativity. And I'm inclined to agree.
You'll also find claims that it shows sparks of a new mind. And, paradoxically, I'm inclined to agree with that, too.
The problem is that we often try to analyze an LLM as a standalone object, without fully grasping what it is at its core. This article posits that the crucial question isn't what an LLM knows or can do, but what it fundamentally is.

The Problem: Traditional phishing emails are relatively easy to spot. AI-generated ones are not.

Traditional approaches to SQL query generation often rely on instruction-tuned language models, but these can be inefficient and inaccurate. In this article, we’ll explore a new method based on reinforcement learning for model fine-tuning, which can improve both the accuracy and efficiency of SQL generation.

Hello, Habr! We continue our series of articles on the new features of the Tantor Postgres 17.5.0 DBMS, and today we will talk about authorization support via the OAuth 2.0 Device Authorization Flow, a modern and secure access method that allows applications to request access to PostgreSQL on behalf of the user through an external identity and access management provider such as Keycloak. This is especially convenient for cloud environments and microservice architectures (the feature will also be available in PostgreSQL 18). In this article, we'll walk step by step through configuring OAuth authorization in PostgreSQL with Keycloak: configure Keycloak, prepare PostgreSQL, write an OAuth token validator for PostgreSQL, and verify successful authorization via psql using the Device Flow.
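As a preview of the moving parts, here is a rough sketch of the server and client configuration as it looks in the upstream PostgreSQL 18 interface; Tantor's exact option names may differ, and the Keycloak realm URL, validator name, and client ID below are placeholders.

```
# postgresql.conf: load the OAuth token validator module (name is a placeholder)
oauth_validator_libraries = 'my_oauth_validator'

# pg_hba.conf: authenticate clients with the oauth method against a Keycloak realm
host  all  all  0.0.0.0/0  oauth  issuer="https://keycloak.example.com/realms/postgres"  scope="openid"

# psql: libpq runs the Device Authorization Flow and prints the verification URL
psql "host=db.example.com dbname=postgres user=alice oauth_issuer=https://keycloak.example.com/realms/postgres oauth_client_id=psql-device"
```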

Exposed is an SQL library for Kotlin with DSL and DAO APIs for database interactions. While it comes with support for standard SQL data types, you can extend its functionality by creating custom column types.
Custom column types are useful when Exposed lacks support for specific database types (like PostgreSQL's enum, inet or ltree) or when you want to map columns to domain-specific types that better align with your business logic. By implementing custom columns, you gain control over data storage and retrieval while maintaining type safety.
In this article, we'll explore how to create custom column types in Exposed by building a simple column type for PostgreSQL's enum.
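For a sense of where this ends up, here is roughly how a PostgreSQL enum can be mapped with Exposed's customEnumeration helper (Exposed 0.x-style imports; the enum, table, and type names are invented for illustration). The article's from-scratch ColumnType implementation digs a level deeper than this shortcut.

```kotlin
// Sketch: mapping a PostgreSQL enum to a Kotlin enum with customEnumeration.
// The names (Mood, mood_type, tasks) are illustrative only.
import org.jetbrains.exposed.sql.Table
import org.postgresql.util.PGobject

enum class Mood { HAPPY, NEUTRAL, SAD }

// PGobject carries the value together with its PostgreSQL type name,
// so the JDBC driver sends it as the enum type rather than plain text.
class PGEnum<T : Enum<T>>(enumTypeName: String, enumValue: T?) : PGobject() {
    init {
        value = enumValue?.name
        type = enumTypeName
    }
}

object Tasks : Table("tasks") {
    val id = integer("id").autoIncrement()

    // Requires: CREATE TYPE mood_type AS ENUM ('HAPPY', 'NEUTRAL', 'SAD');
    // Reads map the raw string back to Mood; writes wrap the value in PGEnum.
    val mood = customEnumeration(
        "mood", "mood_type",
        { value -> Mood.valueOf(value as String) },
        { PGEnum("mood_type", it) }
    )

    override val primaryKey = PrimaryKey(id)
}
```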

The “test everything” principle doesn’t improve data quality — it destroys it. Hundreds of useless alerts create noise that drowns out truly important signals, and the team stops responding to them. Google and Monzo have already moved away from this approach.
Here’s how to shift from blanket testing to targeted checks at nodes with the greatest impact radius — and why one well-placed test at the source is worth more than a hundred checks downstream.

People have always valued privacy. The developments of the past decades — the internet, social networks, targeted advertising — turned data into an asset, and the AI wave multiplies what can be inferred from crumbs. Phones and apps are integral to people’s lives: some users keep everything on their phones, others are more restrictive. But privacy shouldn’t rest on user awareness alone: developers should provide the first line of defence and the tools that protect a user’s right to privacy. Even if you already deal with most of these pieces daily, I want to share my mental model: how I frame decisions with checklists, plus a few concrete examples from practice.

The myth of the magical fast=true parameter is still alive and well, but in distributed databases, another contender appears: distributed=true. Neither one will save you if you don’t rethink your schema, sharding keys, sequences, queries, and migration process. We walk through every corner with a clear-eyed approach — from choosing sharding keys and colocated tables to CDC, topologies, and foreign key constraints — showing where performance really improves, where it gets more expensive, and how to deal with it.