Today we want to talk about choosing a DBMS for a WMS not as a dry technical discussion, but as a strategic decision that determines the security, budget, and future flexibility of your business. This is not about "why PostgreSQL is technically better," but about why it has become the only safe, cost-effective, and future-proof solution for Russian warehouse systems in the new reality.

This is not just another database article. It is a roadmap for those who do not want to wake up one day with a paralyzed warehouse and multi-million fines due to a bad decision made yesterday. At INTEKEY we have gone this path deliberately, and today our WMS projects for the largest market players run on PostgreSQL. We know from experience where the pitfalls are and how to avoid them.
The new reality
The context after 2022 is familiar to everyone: a mass exit of foreign vendors, including giants like Oracle and Microsoft SQL Server. But behind that are not just inconveniences, but systemic risks that many still try to ignore, hoping to "get by" on old licenses.
Companies, especially state-owned enterprises, banks, and large businesses, now face a hard dilemma. They can stay on old but risky solutions, ignoring not only the MinTsifry registry requirements but also the restrictions on critical information infrastructure and the FSTEC and FSB regulations on data handling, or they can find a new path. We are convinced that for warehouse management systems (WMS) there is only one path: PostgreSQL and its commercial forks. This is not blind faith in open source; it is a cold calculation based on an analysis of risks and opportunities. It is exactly the case where the right strategic choice is also the most practical one.
Why not Oracle? Three key risks that turn a license into a "time bomb"
Let's call things by their names. Continuing to operate foreign DBMSs is not conservatism, it is a direct threat to business continuity. It is like building the foundation of a new warehouse on ground that could slide out from under you at any moment.
Regulatory risk: why critical infrastructure is not "someone else" — it is you
The Jira and Confluence shutdowns are not a scary tale, but a precedent. And the critical infrastructure situation is far more serious than it looks.
Many think: "We are not critical infrastructure, this does not affect us". That is a dangerous misconception. Here is why:
Starting September 1, 2025 the use of foreign software in government bodies and companies with more than 50% state ownership is legally restricted. If you work with them as a contractor or partner, your WMS on Oracle can become a barrier to doing business.
Your status can change. The criteria for being classified as critical infrastructure keep expanding. Today you are not CI, tomorrow your entire industry may fall under regulation. A vivid example is the draft law equating ERP systems with critical infrastructure. WMS is a natural candidate for the next list.
The regulatory trend is obvious. The state consistently tightens requirements for software in use. Choosing Oracle today is a direct regulatory risk. Tomorrow an urgent migration may become mandatory, with huge costs and outage risks.
Financial risk. High Oracle license fees are no longer the "cost of reliability," but a "tribute to a fading era." Investing millions in technologies that can be legally banned tomorrow is short-sighted and economically irrational. We have seen invoices in the tens of millions of rubles for licensing that businesses had to pay just to "avoid rocking the boat." But the boat is already leaking, and patching it with banknotes is a questionable idea.
Business continuity risk. Support, security updates, critical bug fixes - the vendor can stop all of that unilaterally at any time. What will you do when a vulnerability is found in your running warehouse and there is no one to patch it? Or when a new OS update requires a DBMS patch you will never get? This is not a hypothetical threat, but a reality that hundreds of companies have already faced.
PostgreSQL: from open-source to a new enterprise standard
So what is PostgreSQL today? It is not just a "free replacement," but a full-featured, reliable open-source DBMS with a strong Russian ecosystem around it. Over the years we have battle-tested it on real projects with hundreds of concurrent connections and terabytes of data.
Right now the choice is between two main paths, and both lead toward independence:
Vanilla PostgreSQL. A solid baseline option for most typical tasks and small to mid-size WMS projects: it is free, open, and has all the functionality needed for warehouse systems with moderate data volumes and load. Its development is supported not only by thousands of developers worldwide, but also by a fairly large Russian-speaking community that actively shares practices, extensions, and solutions tailored to local realities. For many projects at this scale, it is more than enough.
Commercial forks (for example, Postgres Pro). A solution for the hardest tasks. These are customized versions with unique features, horizontal scaling (sharding), and full enterprise support. They fill the niche previously held by Oracle and MS SQL Server. And critically, these are Russian companies that will not disappear from the market or turn off support.
Enterprise level: when vanilla PostgreSQL is not enough
Yes, vanilla PostgreSQL has its limits, and as an integrator we have to say so. Our practice and expertise show that under high load with data volumes of 3 TB and more, the standard version can start to lose performance. Large operations, such as a mass inventory count in a big warehouse zone or calculating optimal routes for several hundred pickers at once, can take unacceptably long.
Product Director at Postgres Professional, Artem Galonsky:
"The Postgres Pro family was originally designed for serious workloads. That is why these products are actively used by large state and private customers. In particular, we managed to handle large data volumes in the GIS GMP database, which stores transactions of the Russian Treasury. Load testing proves our developments are ready for high loads both in data volume and in processing speed."
What does Postgres Pro offer?
Deep kernel changes. The Postgres Pro codebase is more than twice the size of vanilla PostgreSQL due to optimizations and customization for extreme loads. The index mechanisms, query planner, and caching system have been tuned, which yields a 2-3x performance boost on typical WMS operations (receiving, picking, inventory).
Sharding. For extreme loads there is a ready solution: Postgres Pro Shardman. It has been experimentally shown that a petabyte of data can be loaded into a sharded cluster, which opens the path for systems of practically unlimited volume. Imagine a WMS for a federal retail network where the movement history of every item is stored for years and is available for real-time analytics; this is no longer science fiction.
Optimization for 1C. Postgres Pro Enterprise accounts for the specifics of this platform, which is critical for the many Russian companies where 1C is the number-one accounting system.
WMS and PostgreSQL: technical requirements and how they are met
What exactly does a WMS require from a DBMS? It is not just "store data," but ensure 24/7 operation with very high transaction intensity. A delay of fractions of a second during receiving can lead to accounting discrepancies, and a slow response during picking can break shipment schedules and lead to penalties.
Here are the key takeaways from dozens of successful implementations.
Version matters: why you cannot skimp on being up to date
The minimum threshold is PostgreSQL 14. This is not a whim but a necessity: this version introduced optimizer and indexing features that became critical for modern WMS. But we strongly recommend the latest stable versions (15, 16). Why? Because we saw the difference. In one project, a simple upgrade from version 12 to 14 gave a 15% speed boost for complex turnover reports. No code changes, only internal DBMS improvements.
The three pillars of performance: parameters that decide everything
max_connections: 100 and above. Hit this limit and you get a queue. Each warehouse worker web session, each handheld terminal connection, each background process is a separate connection. At peak hours, when the warehouse is boiling, underestimating this parameter guarantees queues and "freezes." We always calculate it with a 2x margin.
shared_buffers: 25% of RAM is the golden rule. Our WMS aggressively caches reference data (SKU catalog, bins) and current stock. If the buffers are insufficient, the system constantly hits disk and performance drops several times over. It is like trying to work with data not from fast RAM but from a slow hard drive.
work_mem: a small thing that changes everything. Set it to 64 MB or more. Our platform constantly sorts and hashes data: when building routes, forming picking tasks, and in reports. A lack of this memory pushes operations to disk. We have seen in practice how increasing work_mem from 4 MB to 128 MB reduced the time for a key stock report from 3 minutes to 10 seconds. This is not an optimization; it is a fundamentally different level of responsiveness.
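To make the three parameters concrete, here is a minimal postgresql.conf sketch. The figures assume a dedicated host with roughly 32 GB of RAM and are illustrative starting points, not universal recommendations; validate them against your own load testing:

```ini
# postgresql.conf -- illustrative starting values for a mid-size WMS host
# (assumes ~32 GB RAM dedicated to the DBMS; tune via load testing)

max_connections = 300      # peak sessions (operators, terminals, jobs) with a 2x margin
shared_buffers = 8GB       # ~25% of RAM: keeps reference data and stock in memory
work_mem = 64MB            # per-sort/per-hash memory; keeps heavy reports off disk
```

Note that max_connections and shared_buffers take effect only after a server restart, while work_mem can also be raised for a single session (SET work_mem = '128MB') before running a heavy report.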
Why we trust PostgreSQL for industrial operations
We were convinced not only by theory, but by its reliable architecture. The ability to configure hot replicas for redundancy is not an extra option, but a mandatory requirement for any critical warehouse. And tools like the pg_stat_statements extension are our main helper in finding and eliminating bottlenecks. It allows us to optimize slow queries precisely, keeping high performance as projects grow.
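As a sketch of how pg_stat_statements is used in practice: after adding it to shared_preload_libraries and running CREATE EXTENSION pg_stat_statements, a query like the following (column names as in PostgreSQL 13 and later) surfaces the statements that consume the most total execution time:

```sql
-- Top 10 statements by total execution time (PostgreSQL 13+ column names)
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

The statements at the top of this list are the first candidates for indexing, rewriting, or caching.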
Looking ahead: advanced capabilities for modern WMS
A modern WMS is no longer just an accounting system. It is a data center for management decisions. And here the PostgreSQL ecosystem offers solutions that closed competitors simply do not have.
Real-time analytics. Solutions like Postgres Pro AXS and Angri allow running complex analytical queries directly on the operational database, without exporting data to a separate warehouse. That means a manager can see in real time not only current stock, but also sales trends, forecast peak loads, and redistribute resources.
Vector databases and AI. This is not futurism anymore, but the near future. Modern warehouses require not just accounting, but analytics, forecasting, and intelligent search. Regular DBMSs are inefficient for vector representations used in neural networks.
Product Director at Postgres Professional, Artem Galonsky:
"We actively develop this direction. We already have ready modules that let you work with vector data directly inside Postgres Pro, without moving logic to separate services and without deploying specialized vector DBMSs. This opens possibilities for intelligent WMS and related warehouse and supply-chain management systems.
What this gives in practice:
Intelligent search across the product catalog: for example, "find all products similar to this one" by meaning (semantics), not by exact text match. This is especially important for warehouses with large assortments where the same item may be named differently and descriptions are incomplete or inconsistent.
Automatic classification of products and documents using neural networks: the system can learn to identify categories by image, description, or a combination of features and then apply routing, placement, and data quality rules.
Internal chatbots and assistants for employees: answers to questions like "How many units of item X are in zone A?" or "Which positions most often end up mismatched?" in natural language - with access to data and business logic inside the system.
And the key point: we develop data analytics so as to close the typical "zoo of solutions" problem, where the operational DBMS lives separately, data marts and DWH live separately, the BI layer lives separately, and all of this requires complex integration, synchronization, and data duplication. Our approach allows a single DBMS not only to reliably store transactional data, but also to perform research and analytics on the same data (without constant exports and copying), which becomes a step toward a unified ecosystem combining OLTP and OLAP. The result is "two in one": transactional reliability, intelligent functions, and analytical integrity within a single platform."
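Postgres Pro ships its own vector modules; as an open illustration of the semantic-search idea described above, here is a "find similar products" query using the community pgvector extension. The table, column names, and embedding dimension are hypothetical, and the embeddings are assumed to come from an external model:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

-- Hypothetical catalog table; embeddings are produced by an external model
CREATE TABLE sku_catalog (
    sku_id    bigint PRIMARY KEY,
    name      text NOT NULL,
    embedding vector(384)          -- dimension depends on the model used
);

-- "Find products similar to item 42" by meaning:
-- nearest neighbours by cosine distance (the <=> operator)
SELECT c.sku_id, c.name
FROM sku_catalog AS c
WHERE c.sku_id <> 42
ORDER BY c.embedding <=> (SELECT embedding FROM sku_catalog WHERE sku_id = 42)
LIMIT 10;
```

The same pattern covers the catalog-search and classification scenarios: store one embedding per item or document and rank candidates by vector distance instead of exact text match.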
Practical steps: what to focus on when choosing and migrating
So how do you make the right decision without falling for hype or conservative fears? We developed a clear algorithm based on dozens of successful migrations.
When to choose "vanilla" PostgreSQL? For standard WMS, medium data volumes (up to 15-20 TB), when the budget is limited and you have in-house competence to support it. It is a reliable and free foundation.
When should you consider Postgres Pro? For large enterprise solutions, complex analytics, extreme data volumes (>20-30 TB), and when you need Russian 24/7 support. When you cannot afford downtime and need guaranteed vendor response time.
Decision checklist
Estimate data volumes and peak load. Calculate not only current but also projected volumes for the next 3-5 years. Analyze peak load - the "X hour" when 100+ pickers are working simultaneously and mass receiving is underway.
Check legal requirements. Make sure the chosen solution matches your industry's requirements and the MinTsifry registry. For the public sector and related industries, this is not a recommendation but a mandatory requirement.
Calculate TCO for 3-5 years. Include not only licenses (if any) but also support costs, updates, customization, and staff admin salaries. Compare with the TCO of your current solution. You will be surprised how "free" Oracle really is.
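The comparison can be as simple as a few lines of arithmetic. Here is a sketch of such a calculation; every figure below is a made-up placeholder, so substitute your own vendor quotes, salaries, and migration estimates:

```python
# Illustrative 5-year TCO comparison. All amounts are placeholders in rubles;
# replace them with your own vendor quotes and internal cost estimates.

def tco(years, license_per_year, support_per_year, admin_per_year,
        customization_once, migration_once=0):
    """Total cost of ownership over the planning horizon."""
    one_off = migration_once + customization_once
    recurring = years * (license_per_year + support_per_year + admin_per_year)
    return one_off + recurring

# Hypothetical current stack: heavy annual licensing, no migration cost
oracle_5y = tco(5, license_per_year=12_000_000, support_per_year=3_000_000,
                admin_per_year=2_400_000, customization_once=5_000_000)

# Hypothetical PostgreSQL stack: no licenses, but a one-off migration budget
postgres_5y = tco(5, license_per_year=0, support_per_year=1_500_000,
                  admin_per_year=2_400_000, customization_once=5_000_000,
                  migration_once=8_000_000)

print(f"Current stack, 5 years:    {oracle_5y:>12,} RUB")
print(f"PostgreSQL stack, 5 years: {postgres_5y:>12,} RUB")
```

Even with a substantial one-off migration budget, recurring license fees tend to dominate the horizon; the point of the exercise is to make that visible with your real numbers.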
Request documentation and test environments. Nothing replaces hands-on testing on your data and typical operations. Ask the vendor for a test stand and run load testing that simulates your hottest day.
Conclusion
Choosing PostgreSQL today is not just a technical decision. It is a strategic move toward security, economic efficiency, and technological independence for your business. It is an investment in a predictable future without surprises like blocked licenses or sudden regulatory bans.
Do not wait until you must change DBMS in emergency mode under pressure. Start planning the transition now. Modern Russian solutions based on PostgreSQL, as our experience and our partners' experience show, are more than ready for the toughest challenges of modern warehouses. They have proven their reliability not in laboratory conditions but in real distribution centers, where every millisecond of downtime costs tens of thousands of rubles.
This is exactly the case where the right decision is both the most profitable and the safest. Do not follow the herd and do not cling to outdated technologies. Take a calculator, compute TCO, test, and make a balanced decision. Your future IT director will thank you for it.
Which factors are key for you when choosing a DBMS for critical infrastructure? Have you already faced the need to change your technology stack? Share your experience in the comments — let's discuss the most painful and interesting cases.
