Ollama Tutorial: How to Run Local AI Models with Ollama

Ollama has become the standard for running Large Language Models (LLMs) locally. In this tutorial, I want to show you the most important things you should know about Ollama.

There are now so many AI tools for coding that it can be confusing to know which one to pick. Some act as simple helpers (assistants), while others can do the work for you (agents). This guide breaks down the top AI coding tools that you should be aware of. We will look at what they do, who they are for, and how much they cost.

Have you ever been in the middle of a long coding session with an AI, only to lose everything because of a network glitch, a dead battery, or an accidental terminal close? It’s frustrating to start over from scratch.

Over the past two months I have written several small GLSL demos. I wrote an article about the first of them, Red Alp, in which I described the whole process in detail, so I recommend reading it if this field is new to you.
We will look at four demos: Moonlight, Entrance 3, Archipelago, and Cutie. This time, though, I will only share a couple of lessons I learned from each one. We won't dig into every aspect, because that would be excessive.

In this article, I will show you how to build your first AI agent from scratch using Google’s ADK (Agent Development Kit). This is an open-source framework that makes it easier to create agents, test them, add tools, and even build multi-agent systems.

Micro frontends: hype or game-changer? How one team cut costs, sped up releases, and built scalable apps without chaos.

I’ve been using the Gemini CLI a lot lately for my coding projects. I really like how it helps me work faster right inside my terminal. But when I first started, I didn’t always get the best results. Over time, I’ve learned some simple tricks that make a huge difference. If you use the Gemini CLI, I want to share my top 10 pro tips. If you are ready, then let’s get started.

This tutorial will guide you through the process of integrating OpenAI’s powerful Codex coding agent directly into your Visual Studio Code environment. This tool functions as an AI pair programmer, capable of understanding complex prompts to execute commands, write code, run tests, and even build entire applications from scratch.

If you’re like me and work with multiple AI coding agents, you know the frustration of managing different instruction files. It’s a pain to keep everything updated across various formats. But I’ve got some great news for you. A new, simplified standard has emerged, and it’s called AGENTS.md.

Today I’ll show you how to use ChatGPT-5 in the Cursor IDE to take a messy app and make it much better. We’ll go step by step, from enabling the GPT-5 model to using it for real coding tasks.

Have you ever wished for an AI assistant right inside your terminal window? Well, your dream has come true because Google just released Gemini CLI. In this tutorial, I'm going to show you everything you need to know about this new open-source AI agent. We'll cover how to use it, the pricing, and some useful tips and tricks. So, if you're ready, let's get started! ;)

A few weeks ago, OpenAI announced that Codex is available for Plus users, and I didn’t miss a chance to try it. And today, I’m excited to share a guide to OpenAI’s Codex. As a developer, I’ve found it to be a powerful and practical tool.

Manual resource management in low-level, C-style C++ code can be annoying. It's not practical to write good enough RAII wrappers for every single C API you use, but approaches with goto cleanup or piles of nested if (success) checks hurt readability.
A defer macro to the rescue! The deferred lambda is executed on scope exit, no matter how the scope is left: you can return from any point, throw an exception (if allowed), or even use a goto to an outer scope. It is truly zero-cost and doesn't rely on the C runtime or the standard library, so it can be used even in kernel development.
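
For illustration, here is a minimal sketch of how such a macro can be put together. The names DeferGuard and DEFER are placeholders of my own, not necessarily the macro described above, and the sketch assumes C++17 for class template argument deduction; the guard itself needs no library support, the includes are only for the demo.

```cpp
#include <cstdio>   // only for the demo below; the guard itself needs no library support
#include <utility>

// Guard object: stores a callable and invokes it in its destructor, so the
// cleanup runs on any scope exit (early return, exception, goto to an outer scope).
template <typename F>
struct DeferGuard {
    F fn;
    explicit DeferGuard(F f) : fn(std::move(f)) {}
    DeferGuard(const DeferGuard&) = delete;
    ~DeferGuard() { fn(); }
};

// Unique variable name per line, so several DEFERs can coexist in one scope.
#define DEFER_CAT_IMPL(a, b) a##b
#define DEFER_CAT(a, b) DEFER_CAT_IMPL(a, b)
#define DEFER(code) DeferGuard DEFER_CAT(defer_guard_, __LINE__)([&]() { code; })

void read_header(const char* path) {
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return;
    DEFER(std::fclose(f));  // closes the file no matter how we leave the scope

    char buf[64];
    if (std::fread(buf, 1, sizeof buf, f) != sizeof buf) return;  // early return is still safe
    // ... use buf ...
}
```

The essential trick is the same in any variant: the cleanup lives in the destructor of a stack object, so the compiler, not the programmer, guarantees it runs on every exit path.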

In this tutorial, I’ll walk you through everything I’ve learned about using Google Jules — an asynchronous coding agent. I’ve kept the explanations clear and simple, so whether you're an experienced developer or a beginner, you’ll be able to follow along. By the end, you should feel confident working with Jules: assigning tasks, reviewing its output, and making the most of its capabilities. Ready? Let’s dive in. ;)

I’ve recently been playing with Agent Mode in VS Code, which looks promising. If you’re using VS Code and want to give your development a turbo boost, you’ll want to hear about this.

This article is based on the experience of developing the memsafe library, which uses a Clang plugin to add safe memory management and invalidation control of reference data types to C++ at compile time.

In this tutorial, I’ll explain in simple terms what AI, AI agents, and workflows are, and then I’ll walk you through building your very first AI agent in Python using Google’s Agent Development Kit (ADK). By the end, you’ll understand the differences between these concepts and have a working content-assistant agent you can run from your terminal or a web interface.

I recently tried using Firebase Studio, and it has been an interesting experience that I want to share with you. It's a free, browser-based tool from Google that allows you to build full-stack web apps with AI assistance. Want to know more? Then read this article until the end.

This world needs a new theory — a theory that could describe all the theories on the planet. A theory that could easily describe philosophy, mathematics, physics, and psychology. The one that makes all kinds of sciences computable.
This is exactly what we are working on. If we succeed, this theory will become the unified meta-theory of everything.
A year has passed since our last publication, and our task is to share the progress with our English-speaking audience. This is still not a stable version; it’s a draft. Therefore, we welcome any feedback, as well as your participation in the development of the links theory.
As with everything we have done before, the links theory is published and released into the public domain: it belongs to humanity, which means it is yours. This work has many authors, but the work itself is far more important than any specific authorship. We hope that today it can become useful to more people.
We invite you to become a part of this exciting adventure.

The most common types of software bugs are memory management bugs, and very often they lead to the most tragic consequences. There are many types of memory bugs, but the only ones that matter here are memory leaks due to circular references: two or more objects refer to each other, directly or indirectly, so their memory can never be freed and the RAM available to the application gradually shrinks.
Memory leaks due to circular references are also the most difficult to analyze. All other memory bugs have long been solvable at the programming language level (for example, with garbage collectors, borrow checking, or library templates), but the problem of leaks caused by circular references remains unsolved to this day.
Yet it seems to me that there is a very simple way to solve this problem, one that can be implemented in almost any typed programming language, provided, of course, that you do not use the all-permissive unsafe keyword in Rust or reinterpret_cast in C++.
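
To make the problem concrete, here is a minimal C++ sketch (an illustration of the leak itself, not the solution proposed above) showing how a std::shared_ptr cycle keeps memory alive, with a comment on how std::weak_ptr would break the cycle.

```cpp
#include <memory>

// Two nodes that hold a shared_ptr to each other form a reference cycle:
// each keeps the other's use count above zero, so neither destructor ever runs.
struct Node {
    std::shared_ptr<Node> next;   // strong link: participates in the cycle
    // std::weak_ptr<Node> next;  // a weak link here would break the cycle
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;   // cycle: a <-> b

    // When a and b go out of scope, each Node is still referenced by the
    // other, so both use counts stay at 1 and the allocations are never freed.
    return 0;
}
```

Reference counting alone cannot reclaim such a cycle; it has to be broken explicitly, and that gap is exactly what the approach described here is aimed at.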