For years, we lived by a simple rule: one task — one tool. We used MongoDB for documents, ClickHouse or something similar for analytics, good old PostgreSQL for transactions. That model worked… until it didn’t.
Recent years have shown that this idea is starting to crack. After a long love affair with specialization, the market is making a full dialectical turn and coming back to the idea of a super-universal database.

Why are we inevitably moving toward new “monsters” (in a good way), and what architectural shifts lie ahead? We discussed this with Mark Rivkin, Head of technical consulting at Postgres Professional. This article reflects the author’s personal views, based on years of hands-on experience.
The myth of “vanilla is enough”
Globally, the database market is conservative. Oracle and Microsoft still dominate enterprise workloads. Enterprises value predictability, maturity, and features that have been battle-tested for decades.
The big insight from recent large-scale migrations is uncomfortable but clear: pure open source, out of the box, often isn’t enough for real enterprise needs. When companies used to Oracle’s “comfort package” — clustering, advanced security, management tooling, advisors — moved to vanilla PostgreSQL, many of them experienced real culture shock.

Mark Rivkin
Head of technical consulting at Postgres Professional
We always knew PostgreSQL was a solid, developer-friendly mid-range database. But once large enterprises had to rely on it for mission-critical systems, it became obvious that vanilla PostgreSQL lacks many enterprise-grade features: the reliability, manageability, and security big businesses expect. Today, the amount of code we’ve added to Postgres Pro Enterprise is comparable to the size of upstream PostgreSQL itself — and that effort was just to meet real enterprise requirements.
This raises an awkward question for the community: is the idea “we’ll just take free PostgreSQL and everything will be fine” an illusion for large systems? In many cases — yes.
The market is splitting into two groups: those who mostly rebrand existing software and those who invest massive engineering effort into turning an open-source core into a true enterprise platform.
The slow death of specialized databases
Not long ago, it felt like the future belonged to niche products. But reality had other plans. Real business problems increasingly require heterogeneous data in a single query.
Imagine a disaster-response scenario: predicting flood damage in a city. One query may need to combine:
geospatial data (river paths, flood zones),
relational data (buildings, number of floors, vulnerable residents),
unstructured content (floor plans as images, JSON descriptions).
Building a Frankenstein stack of five specialized databases, then trying to integrate them and keep data consistent, is operational madness. We’re rediscovering what Oracle understood long ago: a database must be universal. It has to digest JSON, geospatial data, vectors, and classic relational data — inside one system.
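To make that concrete, here is a minimal sketch of such a query in plain PostgreSQL. It assumes the PostGIS extension and a JSONB column; the table names, columns, and scenario label are invented for illustration.

    CREATE EXTENSION IF NOT EXISTS postgis;

    -- Relational attributes and an unstructured JSONB floor plan in one table
    CREATE TABLE buildings (
        building_id          bigint PRIMARY KEY,
        floors               int,
        vulnerable_residents int,
        footprint            geometry(Polygon, 4326),  -- building outline
        floor_plan           jsonb                     -- plan metadata, exits, etc.
    );

    -- Modeled flood zones stored as geometries
    CREATE TABLE flood_zones (
        zone_id  bigint PRIMARY KEY,
        scenario text,
        geom     geometry(Polygon, 4326)
    );

    -- One query mixing geospatial, relational, and JSON data
    SELECT b.building_id,
           b.floors,
           b.vulnerable_residents,
           b.floor_plan ->> 'evacuation_exit' AS evacuation_exit
    FROM buildings b
    JOIN flood_zones z ON ST_Intersects(b.footprint, z.geom)
    WHERE z.scenario = 'river_overflow'
    ORDER BY b.vulnerable_residents DESC;

The point is not this particular schema, but that a single engine answers the whole question without an integration layer in between.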
The future belongs to converged databases. Not those that do one thing perfectly, but those that do everything well enough — and in one place.
HTAP: the Holy Grail of transactions and analytics
For decades, the split between OLTP (fast transactions) and OLAP (heavy analytics) felt sacred. But businesses want real-time analytics, not reports that lag behind by hours or days while data is copied into a warehouse. This demand gave rise to HTAP (Hybrid Transactional/Analytical Processing).
In the past, when customers asked for analytics, we honestly said: “Sorry, that’s not PostgreSQL’s job.” Vanilla PostgreSQL and its forks have always been OLTP-focused. Today, we can run analytical workloads directly inside Postgres Pro Enterprise.
How does this work? By combining different storage approaches: row-based storage for transactional workloads; columnar storage for analytics, which is dramatically more efficient for aggregations. The really interesting part is bringing Lakehouse concepts into the database core itself.
Some data lives in regular tables, while other data is stored as Parquet files. Thanks to columnar layout, compression, min/max metadata, row-group pruning, and SIMD instructions, performance can be insane. Queries that take seconds on a traditional relational setup can run 20, 30, even 50 times faster.
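As a rough illustration of the idea (not a description of the exact Postgres Pro Enterprise mechanism), the sketch below keeps hot rows in an ordinary heap table and exposes archived data stored as Parquet through a foreign data wrapper. It assumes the open-source parquet_fdw extension; the table names and file path are invented.

    -- Hot transactional data stays in a regular row-store table
    CREATE TABLE orders_current (
        order_id    bigint PRIMARY KEY,
        customer_id bigint,
        amount      numeric,
        created_at  timestamptz
    );

    -- Historical data lives in Parquet files exposed as a foreign table
    CREATE EXTENSION IF NOT EXISTS parquet_fdw;
    CREATE SERVER parquet_srv FOREIGN DATA WRAPPER parquet_fdw;
    CREATE FOREIGN TABLE orders_archive (
        order_id    bigint,
        customer_id bigint,
        amount      numeric,
        created_at  timestamptz
    ) SERVER parquet_srv
      OPTIONS (filename '/data/orders/2023.parquet');

    -- One analytical query spans both storage layouts; the Parquet side
    -- benefits from columnar layout, compression, and row-group pruning
    SELECT date_trunc('month', created_at) AS month,
           sum(amount)                     AS revenue
    FROM (
        SELECT created_at, amount FROM orders_current
        UNION ALL
        SELECT created_at, amount FROM orders_archive
    ) AS all_orders
    GROUP BY 1
    ORDER BY 1;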
This changes not just performance, but user experience. An analyst who used to wait a minute for a filter dropdown to populate now gets results instantly. That’s HTAP in action: transactions and analytics stop being two separate worlds.
AI in databases: autonomous assistant or perfect spy?
AI in databases is not just about a chatbot that writes SQL queries for you. The real vision is an autonomous database that tunes, protects, and optimizes itself. This isn’t science fiction — commercial vendors started moving in this direction years ago.
It’s not just a database. It’s a tightly integrated system with self-management built in. Behind the scenes, AI-based software analyzes workloads, tunes parameters automatically, and helps enforce security.
AI is already reshaping everyday work:
DBAs offload monitoring, tuning, and bottleneck detection.
Developers use AI tools that write code, spot subtle bugs, optimize queries, and even translate SQL dialects.
Support teams use AI to analyze logs, match incidents with known cases, and suggest fixes.
But there are two dark sides to this revolution.
1. The security paradox. To be useful, an AI assistant needs access to metadata — and often to real data.
This caught me off guard. For AI to actually help with an application, it must see schemas and data. At that moment, your carefully designed security model starts falling apart. The new challenge is how to give AI enough access to be helpful — without turning it into a perfect spy.
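There is no single answer yet, but the usual least-privilege toolbox is a starting point. A hypothetical sketch in plain PostgreSQL (the role, schema, and view names are invented): let the assistant see structure, not data, and hand it masked views only when it truly needs rows.

    -- Hypothetical least-privilege role for an AI assistant.
    -- Catalog metadata (schemas, tables, columns) stays visible by default,
    -- but the role gets no direct access to application data.
    CREATE ROLE ai_assistant LOGIN PASSWORD 'change-me';
    REVOKE ALL ON ALL TABLES IN SCHEMA app FROM ai_assistant;

    -- When the assistant genuinely needs rows, expose a masked view only
    -- (app.orders_masked is an assumed, pre-built anonymized view)
    GRANT USAGE ON SCHEMA app TO ai_assistant;
    GRANT SELECT ON app.orders_masked TO ai_assistant;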
2. The trust crisis. AI still hallucinates.
We’re used to computers being wrong only when there’s a bug. AI is different. Ask it about some obscure fork, and it might confidently invent an entire backstory. Until this problem is solved, AI in critical systems requires constant human — or AI-based — oversight.
Hardware is dictating architecture again
For a long time, enterprise databases assumed a simple model: data lives on disk, frequently accessed data is cached in RAM. Persistent memory changes the rules. If your entire database can live in non-volatile memory, many classic mechanisms — logging, buffering, recovery — must be rethought from scratch.
Another trend is bypassing the operating system.
Databases already look like operating systems: they manage memory, concurrency, scheduling. The logical next step is direct access to raw hardware, skipping the general-purpose OS layer. It’s hard, expensive, and complex — but that’s where high-performance systems are heading.
SQL: the language that will outlive us all
Despite countless attempts to replace SQL with specialized APIs, it remains the lingua franca of data.
The reason is simple: SQL absorbs new paradigms instead of fighting them. JSON? Added to SQL. Geospatial data? SQL handles it. Graphs and vectors? Also becoming part of SQL.
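In PostgreSQL terms, a JSON filter and a vector similarity search already sit comfortably in one ordinary statement. A small sketch, assuming the pgvector extension; the table and the data are invented.

    CREATE EXTENSION IF NOT EXISTS vector;  -- pgvector

    CREATE TABLE documents (
        id        bigserial PRIMARY KEY,
        metadata  jsonb,        -- JSON inside SQL
        embedding vector(3)     -- vectors inside SQL (tiny dimension for the example)
    );

    INSERT INTO documents (metadata, embedding) VALUES
        ('{"lang": "en", "title": "Flood report"}', '[0.1, 0.9, 0.2]'),
        ('{"lang": "de", "title": "Hochwasser"}',   '[0.2, 0.8, 0.1]');

    -- JSON filtering and vector similarity search in one ordinary SQL statement
    SELECT id, metadata ->> 'title' AS title
    FROM documents
    WHERE metadata ->> 'lang' = 'en'
    ORDER BY embedding <-> '[0.1, 0.85, 0.25]'
    LIMIT 5;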
The right approach, the one Oracle has championed, isn’t thousands of helper functions, but evolving the language itself. SQL is incredibly resilient and understandable. There are no signs of its death — only adaptation.
A challenge to the community
If we’re serious about the future of databases, we need to stop romanticizing performance for performance’s sake and niche elegance for conference talks.
The future is platform-centric:
one database handling mixed workloads;
one language that evolves without fragmenting the ecosystem;
one management layer controlling hardware, OS, and database;
one AI layer that assists DBAs, developers, and support engineers — and explains its decisions;
one goal: ship applications faster and more safely, because applications — not databases — are what businesses actually need.
In that sense, “specialized universality” is not an oxymoron. It’s a roadmap for the next 5–10 years. And yes — we’re keeping SQL. We’re just done being shy about its maturity.