My feed
Article

How to Connect Open WebUI and Cline to the Telegram Cocoon Decentralized Inference Network

Reading time: 10 min
Reach and readers: 2.7K

It’s surprising that there’s almost no practical information about Telegram Cocoon beyond what’s on GitHub and the official website. Various media outlets have plenty of general coverage about the network launch, but almost nothing about real user experience.

I decided to spend a bit of time and figure out what’s actually going on in the network, how it works, and, most importantly, whether I, as a developer, can use it today. So in this article I’ll look at Cocoon from a developer’s perspective: how to install it and how to use it.

Read more
Article

How Codex Works: OpenAI's Article

Level of difficulty: Medium
Reading time: 15 min
Reach and readers: 4.7K

Hi, Habr! My name is Yura Petrov; I head the development department at Friflex and run the «Мобильный разработчик» ("Mobile Developer") channel. OpenAI recently published a great article in which, for the first time, it describes in detail how its agent for writing and modifying code, Codex CLI, works. The heart of the system is the agent loop.

It is a process in which the model receives a task from the user, calls tools when needed (for example, running commands in the terminal), analyzes the result, and repeats the cycle until it produces a final answer or makes the required changes to the code. The article focuses on how this loop is structured, how requests to the model are formed, and how the system manages context.
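The loop described above can be sketched in a few lines. This is a minimal illustration, not the real Codex CLI internals: the model stub, the tool name, and the message format are all hypothetical.

```python
# Minimal sketch of an agent loop: on each turn the model either requests
# a tool call or returns a final answer; the loop feeds tool results back
# into the conversation history as context.

def run_shell(cmd: str) -> str:
    # Stand-in for a real tool; a real agent would execute the command.
    return f"output of {cmd!r}"

def mock_model(history):
    # Hypothetical stub: ask for one tool call, then produce an answer.
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "run_shell", "args": "ls"}
    return {"answer": "done: listed the working directory"}

def agent_loop(task: str) -> str:
    history = [{"role": "user", "content": task}]
    while True:
        step = mock_model(history)
        if "answer" in step:                 # final answer: exit the loop
            return step["answer"]
        result = run_shell(step["args"])     # call the requested tool
        history.append({"role": "tool", "content": result})

print(agent_loop("list files"))
```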

Read more
Article

Postgres Pro Enterprise 18: built-in memory cache and new high‑availability options

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 4.9K

Asynchronous I/O, ML-based query plan optimization, and built-in connection pooling are among the key features of the new Postgres Pro Enterprise 18. This release brings together the capabilities of the vanilla PostgreSQL 18 core with Enterprise-grade tools for working with large-scale data. Today we will walk through the technical details, new index scanning strategies, and mechanisms for scaling write workloads.

Read more
Article

UX design in 2026: trends to follow, challenges to overcome

Level of difficulty: Easy
Reading time: 6 min
Reach and readers: 4.2K

Trends in UX design change rapidly: not year by year, but month by month. As smartphones and technologies evolve, so do the rules of UX, which is why a product perfectly crafted to last year's trends most likely looks outdated already and needs a thorough update. Some trends stand the test of time, though, while others are quickly replaced by more relevant ones. Here are the top 10 trends set to shape UX design in 2026:

Read more
Article

BlueVein: How I spent a month to avoid wasting 56 hours a year reconnecting Bluetooth devices in a dual-boot setup

Level of difficulty: Medium
Reading time: 5 min
Reach and readers: 4.6K

Do you switch between Linux and Windows in dual-boot? Then you're probably familiar with this problem: you have to reconnect all your Bluetooth devices every time. Headphones, mouse, keyboard, gamepad — everything has to be reconnected.

It's scary to even think about it:
3 devices × 90 seconds × 3 switches per day × 250 days = 56 hours wasted per year.
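The estimate checks out; reproducing the back-of-the-envelope arithmetic:

```python
# 3 devices, 90 seconds each, 3 OS switches a day, 250 working days.
devices, seconds_each, switches_per_day, days = 3, 90, 3, 250
total_hours = devices * seconds_each * switches_per_day * days / 3600
print(total_hours)  # 56.25
```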

I spent a month solving this problem and wrote BlueVein — a utility for automatically synchronizing Bluetooth keys between operating systems.

Read more
Article

Activation Function Stress Test: GELU vs Tanh

Reading time: 8 min
Reach and readers: 4.1K

In modern neural networks, including Transformer-based LLMs, unbounded activation functions such as ReLU and GELU have become the standard. Their main advantages are good gradient flow and fast training of deep models.
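"Unbounded" is easy to see numerically. A quick sketch comparing exact GELU against the saturating Tanh (the formula is the standard GELU definition, not code from the article):

```python
import math

def gelu(x: float) -> float:
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF via erf.
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

# tanh saturates at +/-1, while GELU grows without bound for large inputs,
# so large activations pass through almost unchanged.
print(math.tanh(10.0))  # ~1.0
print(gelu(10.0))       # ~10.0
```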

However, in practice, a problem is observed: when dominant patterns or high-frequency noise appear in the input context (long dialogues, noisy data, repetitive or dominant tokens), models become unstable and prone to generation degradation and hallucinations.

In this article, I attempted to find out if the choice of activation function could be fundamentally linked to LLM hallucinations.

Read more
Article

Weight Decay Deep Dive: How Regularization Locks In Old Knowledge Instead of Erasing It

Level of difficulty: Easy
Reading time: 10 min
Reach and readers: 2.7K

In my previous article, I noted some interesting behavior regarding Weight Decay; here, I examine it in detail.

It is generally accepted in the ML industry that if we take a pre-trained model and fine-tune it on a new task, the old weights are gradually overwritten. Furthermore, if we add Weight Decay (L2 regularization), the process of "forgetting" superfluous information should theoretically happen even faster.
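For reference, the textbook intuition behind that expectation: the weight decay term adds a pull toward zero on every update. A tiny sketch of the standard SGD-with-L2 rule (this is the generic formulation, not the article's exact training setup):

```python
# One SGD step with L2 weight decay: w <- w - lr * (grad + wd * w).
# The wd * w term shrinks every weight toward zero on top of the task gradient.
def sgd_step(w: float, grad: float, lr: float = 0.1, wd: float = 0.01) -> float:
    return w - lr * (grad + wd * w)

w = 1.0
for _ in range(100):            # no task gradient: decay alone acts
    w = sgd_step(w, grad=0.0)
print(round(w, 4))  # 0.9048 -- geometric shrinkage, (1 - lr*wd)^100
```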

I tested this claim experimentally. The results were counter-intuitive: under specific settings, Weight Decay works in the exact opposite way—it protects the old structure from destruction.

Below is a description of the experiment and conclusions for those involved in model training and AI safety.

Read more
Article

Claude Code with Ollama: No Cloud, No Limits

Level of difficulty: Easy
Reading time: 2 min
Reach and readers: 5.6K

In January 2026, Ollama added support for the Anthropic Messages API, enabling Claude Code to connect directly to any Ollama model. This tutorial explains how to install Claude Code, pull and run local models using Ollama, and configure your environment for a seamless local coding experience.

Read more
News

Cloudflare Universal SSL Vulnerability Assigned NotCVE-2026-0001

Reading time: 2 min
Reach and readers: 5.4K

I want to share a critical update regarding the security flaw in Cloudflare's infrastructure that I recently analyzed on my blog and on Ru Habr. The issue, in which Universal SSL ignores CAA records and Account Binding (RFC 8657), has officially entered the coordination phase.

Read more
Article

Subliminal Learning and Structural Inertia: Why Neural Networks Remember What They Should Forget

Level of difficulty: Easy
Reading time: 20 min
Reach and readers: 4.7K

In my previous article, I explored the phenomenon of subliminal learning, but it raised more questions than answers. It is time to dive deeper. Below, you will find the experiments and the code.

In the fields of AI Alignment and LLM Security, a critical question remains: does fine-tuning or Reinforcement Learning from Human Feedback (RLHF) guarantee the removal of unwanted information?

Spoiler: The experiments demonstrated that the well-known Mode Connectivity effect makes the complete erasure of pre-training information practically impossible during standard fine-tuning. Structural Imprinting persists in the weight topology and can be read through a subliminal channel. Even with full weight unfreezing and aggressive L2 regularization (active forgetting), the latent space topology formed during the pre-training stage persists and determines the solution to the new task with an accuracy of 88–99%.

Read more
Article

PostgreSQL for WMS: a DBMS selection strategy in the era of import substitution

Level of difficulty: Medium
Reading time: 9 min
Reach and readers: 6.9K

Today we want to talk about choosing a DBMS for WMS not as a dry technical discussion, but as a strategic decision that determines the security, budget, and future flexibility of your business. This is not about "why PostgreSQL is technically better," but about why it has become the only safe, cost-effective, and future-proof solution for Russian warehouse systems in the new reality.

This is not just another database article. It is a roadmap for those who do not want to wake up one day to a paralyzed warehouse and multi-million fines caused by a bad decision made yesterday. At INTEKEY we have taken this path deliberately, and today our WMS projects for the largest market players run on PostgreSQL. We know from experience where the pitfalls are and how to avoid them.

Read more
Article

Session Teleportation in Claude Code

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 5.1K

Recently, I started using Session Teleportation in Claude Code. It allows you to move an entire conversation, including context, history, and the working branch, between the web and your local terminal.

In this tutorial, I show you how it works and how to use it to make your workflow seamless.

Read more
Article

Apophatic AI: Why Neural Networks Learn Through «NO» and How Synthetic Data Kills Meaning

Level of difficulty: Easy
Reading time: 32 min
Reach and readers: 4.6K

Modern neural network training often resembles alchemy. We have working recipes, but how exactly a statistical model transforms terabytes of text into understanding remains unclear.

Why is subliminal learning (pattern transmission through noise) possible? Why does training on synthetic data lead to degradation, even when the data appears to be of high quality?

In this article, I propose looking at training architecture from a different angle. The core idea is simple: positive definitions in high-dimensional space are computationally inefficient. A neural network does not learn what an object is. It learns what the object is not, and the model's intelligence depends entirely on the quality of this "NOT."

What follows is the theory, experiments in PyTorch (code included), mathematics, and an explanation of why LLM collapse is highly probable.

Read more
Article

Less routine, more control: PPEM gets smarter

Reading time: 3 min
Reach and readers: 8.2K

Bulk config rollouts, built-in OpenTelemetry, and two-click HA cluster control are all part of one goal: making PostgreSQL admin simpler and safer. PPEM 2.3 is a big step toward that — with user-defined presets, a reworked alerting system, and stronger RBAC — helping you bring order to messy configs and trust the system to warn you before things go sideways.

Read more
Article

Codex Skills Deep Dive: Progressive Disclosure, Triggers, and Best Practices

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 6.3K

If you are using the Codex CLI and find yourself writing the same instructions over and over again, you are not using the tool to its full potential. Codex offers a powerful feature called Skills that allows you to package reusable workflows and give your AI agent new capabilities on demand. If you want to know more about it, then read this article until the end.

Read more
Post

polluSensWeb now supports 26 sensors and webhooks

polluSensWeb

With the latest updates, polluSensWeb now supports 26 different sensors and introduces webhook integration, opening up possibilities for real-time automation, data forwarding, and external analytics pipelines.

These sensors are manufactured by different companies and use different protocols. Some transmit data continuously, while others require start commands.

Until now, polluSensWeb was primarily a visualization and diagnostic tool. The data remained within the browser session. This was convenient for testing, calibration, or demonstration, but it limited real-world use cases.

With webhooks enabled, sensor data can be automatically sent to an external endpoint in real time.
This makes it possible to:

  • Forward measurements to databases

  • Trigger alerts or automation workflows

  • Send data to monitoring dashboards such as Grafana

  • Integrate with community platforms or custom APIs
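A webhook forwarder of this kind can be sketched in a few lines of stdlib Python. The payload fields and the endpoint URL below are hypothetical illustrations, not polluSensWeb's actual schema:

```python
import json
import urllib.request

def build_payload(sensor_id: str, value: float, unit: str) -> dict:
    # Hypothetical payload shape; the real polluSensWeb schema may differ.
    return {"sensor": sensor_id, "value": value, "unit": unit}

def post_webhook(url: str, payload: dict) -> None:
    # POST one measurement to an external endpoint as JSON.
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

payload = build_payload("pm25-01", 12.4, "µg/m³")
# post_webhook("https://example.com/hook", payload)  # endpoint is illustrative
```

A receiver on the other end (a database writer, an alerting rule, a Grafana data source) then consumes each POST as it arrives.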

Project on GitHub
Live deployment

Article

Intelligent Systems at Phystech: 2025 Year in Review

Reading time: 22 min
Reach and readers: 7.1K

As we wrap up another year, it's time to look back at what our department has accomplished. 2025 brought us 42 published papers spanning fundamental ML theory, applied AI systems, and cutting-edge optimization methods—from transformer Hessians and generative models to hallucination detection and matrix-oriented optimizers.

Beyond publications, our students won competitions and defended their theses: 14 Bachelor's, 9 Master's, 3 PhD, and 1 DSc. They also launched ambitious group research projects. Three of our faculty and alumni received the prestigious Yandex ML Prize, and our head Konstantin Vorontsov was inducted into the Hall of Fame. If you read our summer overview of thesis defences or last winter's year-in-review for 2024, this post continues that story with the next chapter.

In this year-in-review, we dive into the research highlights, share stories from our educational programs, and celebrate the community that makes it all possible.

Read more
Article

How to speed up mass data inserts in PostgreSQL when using Spring

Level of difficulty: Hard
Reading time: 17 min
Reach and readers: 8.9K

A common task in enterprise systems is to load large volumes of data into PostgreSQL — sometimes tens or even hundreds of millions of rows. At first glance, this seems simple: just write a loop in Java and call save() for every record. But in reality, such an approach can be painfully slow. Even a perfectly tuned PostgreSQL instance won’t help if the application is sending data inefficiently.

This article explains how to significantly accelerate bulk inserts when working with PostgreSQL through Spring and Hibernate. We’ll walk through which Spring and Hibernate settings are worth enabling, why they matter, and how much performance they can actually unlock. We’ll also look at how to build your own data-insertion layer for PostgreSQL — one that lets you switch between different insertion strategies, leverage PostgreSQL’s custom capabilities, and parallelize the process. Finally, we’ll see how to integrate this layer with Spring and what real gains each approach can deliver.
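The article's stack is Java with Spring and Hibernate, but the core idea behind the speedup, batching statements instead of issuing one round trip per row, can be illustrated with stdlib Python and sqlite3 standing in for PostgreSQL (table and row names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
rows = [(i, f"item-{i}") for i in range(10_000)]

# Naive approach: one INSERT per row, like calling save() in a loop.
for row in rows[:3]:
    conn.execute("INSERT INTO items VALUES (?, ?)", row)

# Batched approach: one executemany call, analogous to JDBC/Hibernate
# statement batching, which cuts per-statement overhead dramatically.
conn.executemany("INSERT INTO items VALUES (?, ?)", rows[3:])
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
print(count)  # 10000
```

On a real PostgreSQL instance the gap widens further, because each unbatched statement also pays a network round trip.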

Read more