Artificial Intelligence

AI, ANN, and other forms of artificial intelligence

The Romantics at Anthropic: Why Researchers Talk About LLMs as if They Were Human

Level of difficulty: Easy
Reading time: 7 min
Reach and readers: 1.4K

In my previous article, I showed how researchers confused being 'aware' (signal registration) with being 'conscious' (subjective awareness). But this is no accident — it is part of a narrative being constructed by AI labs. Anthropic is leading this trend. Let’s break down their latest paper, where a "learned pattern" has suddenly turned into "malicious intent."

Read more

Confusing 'Aware' with 'Conscious': Did Researchers Uncover Subjective Experience in LLMs?

Level of difficulty: Easy
Reading time: 12 min
Reach and readers: 1.9K

Imagine this scenario: You ask an AI system, "Are you conscious?" and it answers, "No." You then disable its "capacity to lie" — and it suddenly starts answering, "Yes." The conclusion seems tempting: the model was lying the whole time, hiding its true internal state.

This is the core logic presented in a recent arXiv paper. But what if the researchers didn't disable "deception," but something else entirely? Let’s break down where the interpretation might have diverged from the technical reality — and why this specific oversight is typical in discussions regarding LLM "consciousness."

Read more

Gemini CLI Best Practices – Practical Examples

Level of difficulty: Easy
Reading time: 4 min
Reach and readers: 12K

I’ve been using the Gemini CLI a lot lately for my coding projects. I really like how it helps me work faster right inside my terminal. But when I first started, I didn’t always get the best results. Over time, I’ve learned some simple tricks that make a huge difference. If you use the Gemini CLI too, here are my top 10 tips. Let’s get started.

Read more

The LLM's Narrative Engine: A Critique of Prompting

Level of difficulty: Easy
Reading time: 8 min
Reach and readers: 5.8K

In a previous article, I proposed the holographic hypothesis: an LLM isn't a database of facts, but an interference field—a landscape of probabilities shaped by billions of texts. But a static landscape is just potential. How does the model actually move through it? How does it choose one specific answer from infinite possibilities?

This is where the Narrative Engine comes in. If the holographic hypothesis describes the structure of an LLM's "mind," the narrative engine hypothesis describes its dynamics. It is the mechanism that drives the model, forcing its probabilistic calculations to follow the coherent pathways of stories. This article critiques modern prompting techniques through this new lens, arguing that we are not programming a machine, but initiating a narrative.

Read more

LLM as a Resonance-Holographic Field of Meanings

Level of difficulty: Easy
Reading time: 14 min
Reach and readers: 8.3K

Alright. I pose the same question to an LLM in various forms. And this statistical answer generator, this archive of human knowledge, provides responses that sometimes seem surprisingly novel, and other times, derivative and banal.

On Habr, you'll find arguments that an LLM is incapable of novelty and creativity. And I'm inclined to agree.
You'll also find claims that it shows sparks of a new mind. And, paradoxically, I'm inclined to agree with that, too.

The problem is that we often try to analyze an LLM as a standalone object, without fully grasping what it is at its core. This article posits that the crucial question isn't what an LLM knows or can do, but what it fundamentally is.

Read more

Emotions and Qualia: A New Approach

Level of difficulty: Easy
Reading time: 6 min
Reach and readers: 13K

At last, we arrive at qualia and emotions. Many of you will immediately think of Chalmers, the bat, redness, and zombies. Excellent. We can consider that ground covered.

Today, I will discuss a topic that seems distant from IT but, with each new breakthrough in AI, becomes ever more immediate: consciousness. It seems I speak of little else. So, to be precise, I will discuss its "hard problem": why do we experience at all? Why does the color red (and there’s the redness) feel red, and pain feel like pain?

This subjective, ineffable aspect of experience — the "what it is like" — is what philosophy calls qualia. For decades, it has been a dead end for scientists. But what if we're looking in the wrong direction? What if qualia are not an additional layer to computation, but an inherent property of the very architecture of computation?

Read more

AI slop coding, or How to build ridiculously long attack chains with AI

Level of difficulty: Easy
Reading time: 7 min
Reach and readers: 15K

While researching malware used by attacker groups, we came across a series of unusual attacks that used GitHub repositories to store malicious files and victim data. These campaigns appear targeted rather than large-scale, and it seems the attackers relied heavily on AI during development. The earliest activity we traced was in September 2024, and the most recent in April 2025.

Our Threat Intelligence team investigates complex attacks featuring novel persistence and data collection methods and unique infrastructures. Sometimes we find simple two-line scripts, and other times we run into "bombs" that trigger dozens of different payloads at once. But it's pretty rare for us to come across such long chains of really simple AI-written scripts that still work, tied together in a way that clearly wasn't random. Think of this as an APT-style attack implemented at the "script kiddie" level (a derogatory term in hacker culture for those who rely on scripts or programs written by others).

Read more

Give Your AI Agent Sight: Integrating Chrome DevTools with MCP

Level of difficulty: Easy
Reading time: 3 min
Reach and readers: 29K

Hey everyone! I’m excited to share something that’s a real game-changer for anyone who writes code for the web: the new Chrome DevTools Model Context Protocol (MCP) server. Read on for the details.

Read more

Why LLMs Drift into Convincing Nonsense (And a Practical Solution)

Level of difficulty: Medium
Reading time: 14 min
Reach and readers: 20K

Imagine you have an idea powerful enough to change the world. Your tool of choice is a state-of-the-art LLM, ready to help you formalize the problem, generate hypotheses, and synthesize a solution. What you receive is a construct that is internally logical, elegant, and coherent... yet completely wrong. It's a mix of established facts, model-generated hallucinations, and your own subtle biases. With no way to test it in practice or design a clean experiment, the entire endeavor suddenly starts to look like sophisticated nonsense.

So, what went wrong along the way? From the very first prompt, the model doesn't truly "understand" your ambiguous intent. Instead, it steers you towards a formulation that fits its familiar and computationally cheap patterns. This guidance happens through clarifying questions and structured options, essentially funneling you down one of its predefined "corridors." This behavior isn't driven by any explicit "will" of the model; it's an emergent consequence of probabilistic optimization—minimizing prediction error. For the system, a structured, predictable dialogue is both optimal and safe. This aligns perfectly with the developers' goals: it's cheaper, more stable, and most users are satisfied with quick, template-based answers.

The result is that mathematical efficiency serves engineering and commercial objectives. There is no systemic incentive to combat the AI's tendency to reduce a complex problem to a simple, "cheap" answer. It's profitable for developers, economical for the model, and often, the user doesn't even know what an "ideal" answer would look like.

Read more

Building a Resume Matcher with tRPC, NLP, and Vertex AI

Level of difficulty: Easy
Reading time: 6 min
Reach and readers: 15K

I share how I built a resume matcher app using tRPC, TypeScript, and Google Vertex AI. The project takes PDF resumes and job postings, extracts text, applies basic NLP for skill detection, and then calls Gemini 1.5 Flash for deeper analysis. Along the way, I explain why tRPC felt faster and cleaner than REST or GraphQL for an MVP, show code snippets from the repo, and discuss both the benefits and trade-offs of this approach.
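As a rough illustration of the pipeline described above, here is a minimal sketch of a tRPC procedure that sends extracted resume and job-posting text to Gemini 1.5 Flash through the @google-cloud/vertexai SDK. The router, procedure, and field names are assumptions made for this sketch, not the article's actual code, and the prompt is deliberately simplified.

```typescript
// Illustrative sketch only: matchRouter, analyzeMatch, resumeText and jobText are
// made-up names, not taken from the article's repository.
import { initTRPC } from "@trpc/server";
import { z } from "zod";
import { VertexAI } from "@google-cloud/vertexai";

const t = initTRPC.create();

// Vertex AI client; project and location are placeholders.
const vertex = new VertexAI({ project: "my-gcp-project", location: "us-central1" });
const gemini = vertex.getGenerativeModel({ model: "gemini-1.5-flash" });

export const matchRouter = t.router({
  analyzeMatch: t.procedure
    .input(z.object({ resumeText: z.string(), jobText: z.string() }))
    .mutation(async ({ input }) => {
      // Ask the model to compare the extracted resume text with the job posting.
      const prompt =
        "Compare this resume with the job posting and list matching and missing skills.\n\n" +
        `Resume:\n${input.resumeText}\n\nJob posting:\n${input.jobText}`;
      const result = await gemini.generateContent({
        contents: [{ role: "user", parts: [{ text: prompt }] }],
      });
      // Return the raw model text; a real app would parse it into a structured score.
      const text = result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? "";
      return { analysis: text };
    }),
});

export type MatchRouter = typeof matchRouter;
```

On the client, a procedure like this would be called with the text already extracted from the uploaded PDF and the job posting.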

Read more

START: how to defeat hallucinations and teach LLMs accurate calculations

Level of difficulty: Easy
Reading time: 3 min
Reach and readers: 11K

START is an open-source LLM designed for precise calculations and code verification. It addresses two major issues that most standard models face: hallucinations and errors in multi-step calculations. This article explains why these problems arise and how START solves them.

Read more

OpenAI's Codex CLI Agent: The Complete VS Code Setup Guide

Level of difficulty: Easy
Reading time: 3 min
Reach and readers: 16K

This tutorial will guide you through the process of integrating OpenAI’s powerful Codex coding agent directly into your Visual Studio Code environment. This tool functions as an AI pair programmer, capable of understanding complex prompts to execute commands, write code, run tests, and even build entire applications from scratch.

Read more

AGENTS.md: The README for Your AI Agent

Level of difficulty: Easy
Reading time: 3 min
Reach and readers: 13K

If you’re like me and work with multiple AI coding agents, you know the frustration of managing different instruction files. It’s a pain to keep everything updated across various formats. But I’ve got some great news for you. A new, simplified standard has emerged, and it’s called AGENTS.md.
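For readers who have not seen the format: AGENTS.md is simply a Markdown file placed in the repository root that tells a coding agent how to work with the project. The example below is a made-up, minimal sketch; the project details and commands are placeholders, not taken from the article.

```
# AGENTS.md

## Project overview
TypeScript web service; application code lives in src/, tests in tests/.

## Commands
- Install dependencies: npm install
- Run the test suite: npm test
- Lint and format before committing: npm run lint

## Conventions
- Use strict TypeScript and avoid `any`.
- Add or update tests for any behavior change.
```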

Read more

Docling in Working with Texts, Languages, and Knowledge

Level of difficulty: Medium
Reading time: 20 min
Reach and readers: 11K

An in-depth overview of the open-source Docling toolkit for extracting, structuring, and analyzing data from documents. The article covers approaches to processing multilingual texts, building language- and domain-specific knowledge models, and integrating Docling into AI and NLP projects, with practical examples and recommendations for developers working with large volumes of unstructured data.

Read more

The Great Extinction: How AI is Destroying the Internet

Level of difficultyEasy
Reading time8 min
Reach and readers13K

We are living through an ecological catastrophe. Only this one isn't happening in the Amazon rainforest, but in the digital ecosystem of the internet.

AI assistants have become the apex predators of the digital savannah. They are radically reshaping the entire ecosystem in their own image: instead of antelopes and zebras, information sites are going extinct. Instead of hyenas and jackals, content aggregators are disappearing. In place of a once-rich ecosystem of knowledge, a digital desert of entertainment is all that remains.

Read more