Testing Analytics: Why QA Digs Into the Data and How It Helps the Product

Level of difficulty: Easy
Reading time: 5 min
Reach and readers: 812

In the early stages of development, analytics often ends up in the shadow of functional requirements: if the feature works, the task is done. The correctness of events, the logic of when they fire, and their impact on future reports are often checked only after release, if that check happens at all. But analytics is a full-fledged product tool: it is on analytics results that teams build hypotheses, launch A/B experiments, and make business decisions.
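To make this concrete, here is a minimal sketch of the kind of event check a QA engineer might run; the event name, fields, and schema below are hypothetical, not Cian's actual tracking plan.

```python
# Hypothetical analytics-event check: validate that a captured event
# matches the agreed tracking plan before it reaches the reports.
EXPECTED_SCHEMA = {
    "event": str,       # event name, e.g. "listing_viewed"
    "user_id": str,
    "timestamp": int,   # unix epoch, milliseconds
    "properties": dict,
}

def validate_event(event: dict) -> list[str]:
    """Return a list of schema violations (empty list == event is valid)."""
    errors = []
    for field, expected_type in EXPECTED_SCHEMA.items():
        if field not in event:
            errors.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(event[field]).__name__}")
    return errors

# Example: an event captured from a test session via a debug proxy.
captured = {"event": "listing_viewed", "user_id": "u42",
            "timestamp": 1735689600000, "properties": {"listing_id": 7}}
assert validate_event(captured) == []
```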

In this article, Andrey Smirnov, a test engineer at Cian, explains why analytics is not an "add-on" but part of the feature, what role the tester plays in it, and how analytics review has changed the processes and the quality of decisions in our team. The article reflects the experience of collaboration between QA and the analyst within our product team.

Read more

I Got Tired of Losing Dozens of Tabs and Built Tab Saver: Backing Up Tabs to Google Account with Zero Signups

Reading time: 3 min
Reach and readers: 1.4K

I constantly have multiple Chrome windows open, each with a pile of tabs. One window has a dozen slow analytics queries that I'll check "any minute now." Another has my research on $lookup and $unwind in MongoDB. The third one — with the most tabs — has local school enrollment rules, because life.

Tab-saving extensions have existed for ages, but reviews regularly complain about data loss — so why not build my own, designed specifically for backup? Every Chrome user already has a Google account, usually with sync enabled.

Here's how I used that account to build the Tab Saver extension, which backs up your tabs to your Google Account with no signups and zero configuration needed. It's pretty straightforward, so you can easily do the same.

Read more

Sensor-Level AI: A 380-Parameter Architecture Resistant to Drift and Noise

Level of difficulty: Easy
Reading time: 8 min
Reach and readers: 1.5K

Much attention is currently focused on the size of neural networks and the gigawatts of power consumed by data centers. However, the future lies not only in giant clusters but also in tiny chips embedded directly into the sensing elements of hardware. When a neural network is placed directly inside a sensor chip, it must be exceptionally efficient.

Through experimentation, I have successfully built a neural network architecture with 380 parameters (with potential for further reduction), capable of operating in conditions considered unsuitable for conventional algorithms.
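For a sense of scale, here is a hypothetical PyTorch model in the same size class. It is not the author's architecture, only an illustration of how little capacity a few hundred parameters is.

```python
import torch.nn as nn

# A toy sensor-scale network; layer sizes are made up for illustration.
model = nn.Sequential(
    nn.Linear(6, 20),  # 6*20 + 20 = 140 parameters
    nn.Tanh(),
    nn.Linear(20, 8),  # 20*8 + 8  = 168 parameters
    nn.Tanh(),
    nn.Linear(8, 4),   # 8*4 + 4   = 36 parameters
)

total = sum(p.numel() for p in model.parameters())
print(total)  # 344 -- the same order of magnitude as the article's 380
```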

Read more

How to Connect Open WebUI and Cline to the Telegram Cocoon Decentralized Inference Network

Reading time: 10 min
Reach and readers: 4.3K

It’s surprising that there’s almost no practical information about Telegram Cocoon beyond what’s on GitHub and the official website. Various media outlets have plenty of general coverage about the network launch, but almost nothing about real user experience.

I decided to spend a bit of time and figure out what’s actually going on in the network, how it works, and, most importantly, whether I, as a developer, can use it today. So in this article I’ll look at Cocoon from a developer’s perspective: how to install it and how to use it.

Read more

How Codex Works: OpenAI's Article

Level of difficulty: Medium
Reading time: 15 min
Reach and readers: 5.7K

Hi, Habr! My name is Yura Petrov, I head the development department at Friflex and I am the author of the «Мобильный разработчик» (Mobile Developer) channel. The other day, OpenAI published a great article in which it described in detail, for the first time, how its agent for writing and modifying code, Codex CLI, works. The heart of the system is the agent loop.

This is a process in which the model receives a task from the user, calls tools when necessary (for example, running commands in the terminal), analyzes the result, and repeats the cycle until it arrives at a final answer or makes the required changes to the code. The article focuses on how this loop is structured, how requests to the model are formed, and how the system manages context.
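As a rough schematic (not OpenAI's actual implementation), the loop can be reduced to something like this; `call_model` and `run_tool` are hypothetical stand-ins for the model API and the terminal executor.

```python
def agent_loop(task: str, call_model, run_tool, max_steps: int = 20):
    """Schematic agent loop: prompt -> tool calls -> observe -> repeat."""
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(context)           # model sees the full context
        if reply["type"] == "final_answer":   # model decided it is done
            return reply["content"]
        # Otherwise the model requested a tool, e.g. a terminal command.
        observation = run_tool(reply["tool"], reply["arguments"])
        context.append({"role": "assistant", "content": str(reply)})
        context.append({"role": "tool", "content": observation})
    raise RuntimeError("step budget exhausted without a final answer")
```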

Read more

Postgres Pro Enterprise 18: built-in memory cache and new high‑availability options

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 5.5K

Asynchronous I/O, ML-based query plan optimization, and built-in connection pooling are among the key features of the new Postgres Pro Enterprise 18. This release brings together the capabilities of the vanilla PostgreSQL 18 core with Enterprise-grade tools for working with large-scale data. Today we will walk through the technical details, new index scanning strategies, and mechanisms for scaling write workloads.

Read more

UX design in 2026: trends to follow, challenges to overcome

Level of difficulty: Easy
Reading time: 6 min
Reach and readers: 4.8K

Trends in UX design change rapidly – not even year by year, but month by month. As smartphones and technologies constantly evolve, so do UX rules – that's why a product that was perfectly crafted to last year's trends most likely already looks outdated and needs a thorough update. Some trends stand the test of time, though, while others are quickly replaced by more relevant ones. Here are the top 10 trends that are going to shape UX design in 2026:

Read more

BlueVein: How I spent a month to avoid wasting 56 hours a year reconnecting Bluetooth devices in dual-boot

Level of difficulty: Medium
Reading time: 5 min
Reach and readers: 5.2K

Do you switch between Linux and Windows in a dual-boot setup? Then you're probably familiar with this problem: you have to re-pair all your Bluetooth devices every time. Headphones, mouse, keyboard, gamepad — everything has to be reconnected.

It's scary to even think about it:
3 devices × 90 seconds × 3 switches per day × 250 days = 56 hours wasted per year.

I spent a month solving this problem and wrote BlueVein — a utility for automatically synchronizing Bluetooth keys between operating systems.

Read more

Activation Function Stress Test: GELU vs Tanh

Reading time: 8 min
Reach and readers: 4.5K

In modern neural networks, including Transformer-based LLMs, unbounded activation functions—ReLU and GELU—have become the standard. Their main advantages are good gradient flow and fast training of deep models.

However, in practice, a problem is observed: when dominant patterns or high-frequency noise appear in the input context (long dialogues, noisy data, repetitive or dominant tokens), models become unstable and prone to generation degradation and hallucinations.

In this article, I attempted to find out if the choice of activation function could be fundamentally linked to LLM hallucinations.
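To see the property at stake, a minimal sketch: GELU passes large activations through almost unchanged, while tanh saturates and clips them.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([-8.0, -1.0, 0.0, 1.0, 8.0])

# GELU is unbounded above: a dominant input stays dominant at the output.
print(F.gelu(x))      # ~[-0.0000, -0.1587, 0.0000, 0.8413, 8.0000]

# tanh saturates: every input is squashed into (-1, 1).
print(torch.tanh(x))  # ~[-1.0000, -0.7616, 0.0000, 0.7616, 1.0000]
```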

Read more

Weight Decay Deep Dive: How Regularization Locks In Old Knowledge Instead of Erasing It

Level of difficulty: Easy
Reading time: 10 min
Reach and readers: 3.1K

In my previous article, I noted some interesting behavior regarding Weight Decay; here, I examine it in detail.

It is generally accepted in the ML industry that if we take a pre-trained model and fine-tune it on a new task, the old weights are gradually overwritten. Furthermore, if we add Weight Decay (L2 regularization), the process of "forgetting" superfluous information should theoretically happen even faster.

I tested this claim experimentally. The results were counter-intuitive: under specific settings, Weight Decay works in the exact opposite way—it protects the old structure from destruction.
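For intuition, here is a toy version of that kind of measurement (an assumed setup, not the author's code): fine-tune copies of the same pretrained layer with and without Weight Decay and compare how far the weights drift from the starting point.

```python
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
pretrained = nn.Linear(32, 32)  # stand-in for a pretrained layer
data = [(torch.randn(64, 32), torch.randn(64, 32)) for _ in range(50)]

def finetune(weight_decay: float) -> float:
    model = copy.deepcopy(pretrained)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3,
                            weight_decay=weight_decay)
    loss_fn = nn.MSELoss()
    for x, y in data:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # L2 distance from the original weights: a crude "forgetting" proxy.
    return (model.weight - pretrained.weight).norm().item()

print("drift, wd=0.0:", finetune(0.0))
print("drift, wd=0.1:", finetune(0.1))
```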

Below is a description of the experiment and conclusions for those involved in model training and AI safety.

Read more

Claude Code with Ollama: No Cloud, No Limits

Level of difficulty: Easy
Reading time: 2 min
Reach and readers: 6.1K

In January 2026, Ollama added support for the Anthropic Messages API, enabling Claude Code to connect directly to any Ollama model. This tutorial explains how to install Claude Code, pull and run local models using Ollama, and configure your environment for a seamless local coding experience.
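Before pointing Claude Code at a local model, you can sanity-check Ollama's Anthropic-compatible endpoint from Python. The base URL, port, and model name below are assumptions for a default local setup; adjust them to your environment.

```python
# pip install anthropic; assumes `ollama serve` is running locally and a
# model has already been pulled, e.g. `ollama pull qwen2.5-coder`.
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:11434",  # Ollama's default port (assumed endpoint)
    api_key="ollama",                   # dummy key; Ollama does not check it
)

message = client.messages.create(
    model="qwen2.5-coder",              # any local model you have pulled
    max_tokens=256,
    messages=[{"role": "user", "content": "Write a hello-world in Go."}],
)
print(message.content[0].text)
```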

Read more

Subliminal Learning and Structural Inertia: Why Neural Networks Remember What They Should Forget

Level of difficulty: Easy
Reading time: 20 min
Reach and readers: 5K

In my previous article, I explored the phenomenon of subliminal learning, but it raised more questions than answers. It is time to dive deeper. Below, you will find the experiments and the code.

In the fields of AI Alignment and LLM Security, a critical question remains: does fine-tuning or Reinforcement Learning from Human Feedback (RLHF) guarantee the removal of unwanted information?

Spoiler: The experiments demonstrated that the well-known Mode Connectivity effect makes the complete erasure of pre-training information practically impossible during standard fine-tuning. Structural Imprinting persists in the weight topology and can be read through a subliminal channel. Even with full weight unfreezing and aggressive L2 regularization (active forgetting), the latent space topology formed during the pre-training stage persists and determines the solution to the new task with an accuracy of 88–99%.
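One common way to test whether pre-training structure persists is a linear probe on frozen features. The sketch below shows that general technique under assumed inputs; it is not the article's actual experiment.

```python
import torch
import torch.nn as nn

def linear_probe_accuracy(encoder: nn.Module, x: torch.Tensor,
                          old_labels: torch.Tensor, steps: int = 200) -> float:
    """Fit a linear classifier on frozen features; high accuracy means the
    old task is still linearly decodable from the fine-tuned network.
    `old_labels` is a LongTensor of class indices for the *original* task."""
    with torch.no_grad():
        feats = encoder(x)  # frozen representations of the fine-tuned model
    probe = nn.Linear(feats.shape[1], int(old_labels.max()) + 1)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(probe(feats), old_labels).backward()
        opt.step()
    return (probe(feats).argmax(dim=1) == old_labels).float().mean().item()
```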

Read more

PostgreSQL for WMS: a DBMS selection strategy in the era of import substitution

Level of difficulty: Medium
Reading time: 9 min
Reach and readers: 7.2K

Today we want to talk about choosing a DBMS for WMS not as a dry technical discussion, but as a strategic decision that determines the security, budget, and future flexibility of your business. This is not about "why PostgreSQL is technically better," but about why it has become the only safe, cost-effective, and future-proof solution for Russian warehouse systems in the new reality.

This is not just another database article. It is a roadmap for those who do not want to wake up one day with a paralyzed warehouse and multi-million fines because of a bad decision made yesterday. At INTEKEY, we took this path deliberately, and today our WMS projects for the largest market players run on PostgreSQL. We know from experience where the pitfalls are and how to avoid them.

Read more

Session Teleportation in Claude Code

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 5.5K

Recently, I started using Session Teleportation in Claude Code. It allows you to move an entire conversation, including context, history, and the working branch, between the web and your local terminal.

In this tutorial, I show you how it works and how to use it to make your workflow seamless.

Read more

Apophatic AI: Why Neural Networks Learn Through "NO" and How Synthetic Data Kills Meaning

Level of difficulty: Easy
Reading time: 32 min
Reach and readers: 4.9K

Modern neural network training often resembles alchemy. We have working recipes, but how exactly a statistical model transforms terabytes of text into understanding remains unclear.

Why is subliminal learning (pattern transmission through noise) possible? Why does training on synthetic data lead to degradation, even when the data appears to be of high quality?

In this article, I propose looking at training architecture from a different angle. The core idea is simple: positive definitions in high-dimensional space are computationally inefficient. A neural network does not learn what an object is. It learns what the object is not, and the model's intelligence depends entirely on the quality of this "NOT."

What follows is the theory, experiments in PyTorch (code included), mathematics, and an explanation of why LLM collapse is highly probable.

Read more

Less routine, more control: PPEM gets smarter

Reading time: 3 min
Reach and readers: 8.4K

Bulk config rollouts, built-in OpenTelemetry, and two-click HA cluster control are all part of one goal: making PostgreSQL admin simpler and safer. PPEM 2.3 is a big step toward that — with user-defined presets, a reworked alerting system, and stronger RBAC — helping you bring order to messy configs and trust the system to warn you before things go sideways.

Read more

Codex Skills Deep Dive: Progressive Disclosure, Triggers, and Best Practices

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 6.5K

If you are using the Codex CLI and find yourself writing the same instructions over and over, you are not using the tool to its full potential. Codex offers a powerful feature called Skills that lets you package reusable workflows and give your AI agent new capabilities on demand. Read on to see how they work.

Read more

Intelligent Systems at Phystech: 2025 Year in Review

Reading time22 min
Reach and readers7.3K

As we wrap up another year, it's time to look back at what our department has accomplished. 2025 brought us 42 published papers spanning fundamental ML theory, applied AI systems, and cutting-edge optimization methods—from transformer Hessians and generative models to hallucination detection and matrix-oriented optimizers.

Beyond publications, our students won competitions and defended their theses: 14 Bachelor's, 9 Master's, 3 PhD, and 1 DSc dissertations. They also launched ambitious group research projects. Three of our faculty and alumni received the prestigious Yandex ML Prize, and our department head Konstantin Vorontsov was inducted into the Hall of Fame. If you read our summer overview of thesis defences or last winter's year-in-review for 2024, this post continues that story with the next chapter.

In this year-in-review, we dive into the research highlights, share stories from our educational programs, and celebrate the community that makes it all possible.

Read more