Ollama Tutorial: How to Run Local AI Models with Ollama

Ollama has become the standard for running Large Language Models (LLMs) locally. In this tutorial, I want to show you the most important things you should know about Ollama.
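To give a feel for how simple the workflow is, here is a minimal sketch that queries a locally running Ollama server over its REST API (it assumes the server is on the default port 11434 and that a model such as llama3 has already been pulled; the model name is illustrative):

```python
# A minimal sketch: call a local Ollama server over its REST API.
# Assumes Ollama is running on the default port and `ollama pull llama3`
# has been done; the model name is illustrative.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Explain what Ollama does in one sentence.",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```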

In this article, I will show you how to build your first AI agent from scratch using Google’s ADK (Agent Development Kit). This is an open-source framework that makes it easier to create agents, test them, add tools, and even build multi-agent systems.

My name is Anatoly Bobunov, and I work as a Software Development Engineer in Test - or SDET for short - at EXANTE. When I joined one of our projects, I discovered that several of our test suites took more than an hour to run - painfully slow, to the point where running them for every merge request was simply unrealistic. We wanted fast feedback on each commit, but at that speed, it just wasn’t going to happen.
Eventually, through a series of small but precise improvements, I managed to make the suites run 8.5× faster without rewriting the tests from scratch. In this article, I’ll walk through the bottlenecks we found and how we fixed them.

The Problem: Traditional phishing emails are relatively easy to spot. AI-generated ones are not.

Hello, Habr! Today I want to tell you about my project, “Game Engine 3”, a software framework for creating 2D games and applications...

DocLing for working with texts, languages, and knowledge: an in-depth overview of the open-source DocLing toolkit for extracting, structuring, and analyzing data from documents. The article covers approaches to processing multilingual texts, building language- and domain-specific knowledge models, and integrating DocLing into AI and NLP projects. It includes practical examples and recommendations for developers working with large volumes of unstructured data.
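For a taste of the extraction workflow, here is a minimal sketch assuming the open-source docling package is installed (pip install docling); the file name is illustrative:

```python
# A minimal sketch of document extraction with Docling; the input
# file is illustrative, and PDF, DOCX, HTML, and more are supported.
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("report.pdf")
print(result.document.export_to_markdown())  # structured Markdown output
```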

Would you like to know which indexes are used frequently or rarely? Which ones aren't used at all? Which tables and indexes are the largest? It's very easy to create visualizations for this. They're both visually appealing and practically useful.
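Assuming the database in question is PostgreSQL (an assumption; the teaser doesn't name the DBMS), the raw numbers behind such visualizations can be pulled from the pg_stat_user_indexes view; connection details below are illustrative:

```python
# A minimal sketch: fetch index usage and size statistics from PostgreSQL.
# Connection parameters are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT relname       AS table_name,
               indexrelname  AS index_name,
               idx_scan      AS scans,
               pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
        FROM pg_stat_user_indexes
        ORDER BY idx_scan ASC  -- rarely used indexes first
    """)
    for table_name, index_name, scans, index_size in cur.fetchall():
        print(f"{table_name}.{index_name}: {scans} scans, {index_size}")
```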

In this tutorial, I’ll explain in simple terms what AI, AI agents, and workflows are, and then I’ll walk you through building your very first AI agent in Python using Google’s Agent Development Kit (ADK). By the end, you’ll understand the differences between these concepts and have a working content-assistant agent you can run from your terminal or a web interface.
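To preview the shape of the code, here is a sketch of an ADK agent definition modeled on the quickstart pattern (the agent name, model name, and tool are all illustrative, not the article's exact example):

```python
# A sketch of an ADK agent; names, model, and tool are illustrative.
from google.adk.agents import Agent

def word_count(text: str) -> dict:
    """Toy tool: count the words in a piece of text."""
    return {"status": "success", "words": len(text.split())}

root_agent = Agent(
    name="content_assistant",
    model="gemini-2.0-flash",
    description="Helps draft and analyze short pieces of content.",
    instruction="You are a concise content assistant. Use tools when helpful.",
    tools=[word_count],  # plain Python functions become callable tools
)
```

From the project directory, `adk run` starts a terminal session and `adk web` serves the browser UI.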

Automated data scraping (parsing) has become an essential practice for developers, analysts, and automation specialists. It is used to extract large amounts of information from websites, from competitors’ prices and reviews to social media content. To meet this need, numerous scrapers have been developed: libraries, frameworks, and cloud services that enable programmatic extraction of web data. Some solutions are designed for rapid parsing of static pages, others for handling complex JavaScript-driven navigation, and still others for retrieving data via APIs.
In this article, I will review the top scraping tools, both open-source libraries and commercial SaaS/API services, and compare them on key metrics:
• Speed and scalability;
• Ability to bypass anti-bot protections;
• Proxy support and CAPTCHA recognition;
• Quality of documentation;
• Availability of APIs and other important features.
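As a baseline for the comparison, the simplest case (a static page) needs nothing more than requests plus BeautifulSoup; the URL and CSS selectors below are illustrative:

```python
# A minimal static-page scraping sketch; URL and selectors are illustrative.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/products", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

for item in soup.select(".product"):
    name = item.select_one(".name")
    price = item.select_one(".price")
    if name and price:
        print(name.get_text(strip=True), price.get_text(strip=True))
```

Everything beyond this, JavaScript rendering, anti-bot evasion, proxy rotation, is where the tools in this review start to differ.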

I wrote a longread in English on the current state of open-source ASIC design tools. Along the way, I introduced English-speaking readers to the practices of Siberian shamans and the figure of Ivan Susanin, and mentioned the planned seminars in Mexico and Armenia.
A text on the current state of Open-source ASIC design tools. Includes side discussions of the upcoming hackathons in Mexico and Armenia, Docker and Python, Static Timing Analysis and RISC-V, Siberian shamans and treacherous swamps in Belarus.

Any SEO expert knows the pain of collecting Google keyword data. It’s one thing if you can count all the queries on one hand, but what if they number in the thousands? How do you check the search volume in Google for each keyword? Frankly, once you hit tens of thousands of keywords, it’s enough to make your head spin. You’ll be tempted to reach for outdated, familiar tools, only to find modern reality throwing a curveball: the old formula of Key Collector + Google Ads + a few proxies simply doesn’t cut it anymore. We’re entering a new era, and without direct access to the official API, things get grim and complicated fast.

The explicit reparameterization trick is often used to train various latent variable models because it makes it easy to compute gradients of continuous random variables. However, it is not applicable to several important continuous distributions, such as mixtures and the Gamma, Beta, and Dirichlet distributions.
An alternative method for calculating reparameterization gradients relies on implicit differentiation of cumulative distribution functions. This implicit reparameterization trick is much more expressive and applies to a wider class of distributions.
This article provides an overview of various reparameterization tricks and announces a new Python library, irt.distributions, for sampling from various distributions using the implicit reparameterization trick.
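As a quick illustration of the underlying idea, PyTorch's torch.distributions already uses implicit reparameterization gradients for Gamma-based distributions, so a Dirichlet sample can sit inside a differentiable computation (shown here instead of irt.distributions, whose API is not reproduced):

```python
# Implicit reparameterization in practice: PyTorch's rsample() for the
# Dirichlet backpropagates through sampling via implicit differentiation
# of the underlying Gamma sampler.
import torch

alpha = torch.tensor([2.0, 3.0, 5.0], requires_grad=True)
sample = torch.distributions.Dirichlet(alpha).rsample()

loss = (sample ** 2).sum()  # any scalar function of the sample
loss.backward()
print(alpha.grad)           # gradients flowed through the sampling step
```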

Are you tired of writing messy and unorganized code that leads to frustration and bugs? You can transform your code from a confusing mess into something crystal clear with a few simple changes. In this article, we'll explore key principles from the book "Clean Code" by Robert C. Martin, also known as Uncle Bob, and apply them to Python. Whether you're a web developer, software engineer, data analyst, or data scientist, these principles will help you write clean, readable, and maintainable Python code.
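As a small taste, here is a before/after pair of my own (not taken from the book) applying two of those principles, intention-revealing names and single-purpose functions:

```python
# Before: cryptic names and a function that filters and formats at once.
def proc(d):
    r = []
    for i in d:
        if i[1] > 18:
            r.append(i[0].title())
    return r

# After: descriptive names and one job per function.
def is_adult(age: int) -> bool:
    return age > 18

def adult_names(people: list[tuple[str, int]]) -> list[str]:
    """Return title-cased names of people older than 18."""
    return [name.title() for name, age in people if is_adult(age)]
```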

In this article, we are going to do something really cool: we will build a chatbot using Python and the Gemini API. This will be a web-based assistant and could be the beginning of your own AI project. It's beginner-friendly, and I will guide you through it step-by-step. By the end, you'll have your own AI assistant!
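The core chat loop is only a few lines; here is a sketch using the google-generativeai package (it assumes a GEMINI_API_KEY environment variable, and the model name is illustrative):

```python
# A minimal Gemini chat loop; assumes `pip install google-generativeai`
# and a GEMINI_API_KEY environment variable. Model name is illustrative.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")
chat = model.start_chat()  # the chat object keeps conversation history

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    print("Bot:", chat.send_message(user_input).text)
```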
Practically all programming languages are built either on the principle of similarity (make one like that other language, only with its own blackjack) or to realize some new concept (modularity, purity of functional computation, etc.), or both at once.
But in any case, the creator of a new programming language doesn't pull ideas out of thin air. They are grounded in the author's previous experience, fascination with a new concept, and other initial assumptions and constraints.
Is there a minimal set of lexemes, operators, or syntactic constructs from which an arbitrary grammar for a modern general-purpose programming language can be constructed?

Problem
Trendwatching is a powerful tool for driving strategic innovation. It helps discover new technologies, business models, and products that can be used for idea generation and technology transfer. It serves product managers, business stream managers, top managers, and "strategists", and is mostly used on a regular basis.

In the realm of data visualization, where insight meets aesthetics, Matplotlib stands as a towering beacon of versatility and creativity. As one of the most popular plotting libraries in Python, Matplotlib empowers data scientists, analysts, and enthusiasts alike to transform raw data into captivating visual narratives. Let us embark on a journey through the vibrant landscapes of Matplotlib, exploring its features, capabilities, and the artistry it inspires.
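As a small taste of that versatility, a minimal sketch with synthetic data and illustrative labels:

```python
# A minimal Matplotlib sketch: two labeled curves on synthetic data.
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, np.sin(x), label="sin(x)")
ax.plot(x, np.cos(x), linestyle="--", label="cos(x)")
ax.set_xlabel("x")
ax.set_ylabel("value")
ax.set_title("Two trigonometric functions")
ax.legend()
plt.show()
```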

I know it can be hard to learn a new programming language. In this article, I want to share my plan with you: a way to learn Python in eight weeks using videos, articles, and practice exercises. The exercises matter most, because I believe the best way to learn is by doing.
I've created this learning plan for people who don't have much free time. You only need about 30-50 minutes a day and consistency. The plan follows the 80/20 principle: learn the most important things first and pick up the rest through practice.
For those who read this article to the end, I have prepared a tracking sheet to help you follow your progress.