
Machine learning

The basis of artificial intelligence


How to Learn Python FREE in 8 Weeks: The 80/20 Learning Plan

Level of difficulty Easy
Reading time 6 min
Views 621

I know learning a new programming language can be hard. In this article, I want to share my plan with you: a way to learn Python in eight weeks using videos, articles, and practice exercises. The exercises are especially important, because I believe the best way to learn is by doing.

I've created this learning plan for people who don't have much free time. You only need about 30-50 minutes a day and consistency. In my plan, I use the 80/20 principle, which will help you learn the most important things first and improve the rest through practice.

For those who read this article to the end, I have prepared a tracking sheet to help you monitor your progress.

Read more
Total votes 1: ↑1 and ↓0 +1
Comments 0

3. Information Theory + ML. Prediction

Reading time 31 min
Views 490

In this third part, we will discuss Machine Learning, specifically the prediction task in the context of information theory.

The concept of Mutual Information (MI) is closely related to the prediction task. In fact, prediction can be viewed as the problem of extracting information about the signal from the factors: some of the information about the signal is contained in them. If you can write a function that computes a value close to the signal from the factors, then you have managed to extract part of the mutual information between the signal and the factors.
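As a minimal illustration of this framing (my sketch, not code from the article), mutual information between a factor and a signal can be estimated numerically; the variable names here are invented for the example:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 1))                          # the factor
signal = np.sin(x[:, 0]) + 0.1 * rng.normal(size=5000)  # signal depends on the factor

# Estimated MI (in nats) between factor and signal; any predictor f(x)
# can extract at most this much information about the signal.
print(mutual_info_regression(x, signal))  # noticeably above zero
```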

Read more
Rating 0
Comments 0

2. Information Theory + ML. Mutual Information

Reading time 11 min
Views 652

In Part 1, we became familiar with the concept of entropy.

In this part, we will delve into the concept of Mutual Information, which opens doors to error-resistant coding, compression algorithms, and offers a fresh perspective on regression and Machine Learning tasks.

It is an essential component that will pave the way, in the next section, for tackling Machine Learning problems as tasks of extracting mutual information between features and the predicted variable.

This part includes three interesting and important visualizations.

The first one will visualize entropy for two random variables and their mutual information.
The second one will shed light on the very concept of dependence between two random variables, emphasizing that zero correlation does not imply independence (a minimal numerical illustration follows this list).
The third one will demonstrate that the capacity of an information channel has a straightforward geometric interpretation through the convexity measure of the entropy function.
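To illustrate the second point (a sketch of mine, not one of the article's visualizations): take a symmetric X and Y = X², which is completely determined by X yet uncorrelated with it:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = x ** 2  # y is a deterministic function of x: maximal dependence

# Pearson correlation is ~0, since E[x * x^2] = E[x^3] = 0 for symmetric x,
# even though knowing x determines y exactly.
print(np.corrcoef(x, y)[0, 1])  # close to 0
```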

Along the way, we will prove a simplified version of the Shannon-Hartley theorem on the maximum capacity of a noisy channel. Let's dive in!
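For reference, the full (non-simplified) statement of the Shannon-Hartley theorem is the standard capacity formula:

```latex
% Capacity C (bits/s) of a channel with bandwidth B (Hz), average signal
% power S, and additive white Gaussian noise of power N:
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```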

Read more
Total votes 2: ↑2 and ↓0 +2
Comments 0

1. Information Theory + ML. Entropy

Reading time 10 min
Views 927

I've long wanted to create educational materials on the topic of Information Theory + Machine Learning. I found some old drafts and decided to polish them up here, on Habr.

Information Theory and Machine Learning strike me as an interesting pair of fields whose deep connection is often unknown to ML engineers, and whose synergy has not yet been fully explored.

Let's start with basic concepts like Entropy, Information in a message, Mutual Information, and channel capacity. Next, there will be materials on the similarity between tasks of maximizing Mutual Information and minimizing Loss in regression problems. Then there will be a section on Information Geometry: Fisher metric, geodesics, gradient methods, and their connection to Gaussian processes (moving along the gradient using SGD is moving along the geodesic with noise).
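As a warm-up for the first of those concepts, here is a minimal sketch (mine, not the author's) of Shannon entropy for a discrete distribution:

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy H(p) = -sum_i p_i log2 p_i, in bits."""
    p = p[p > 0]  # by convention 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())

print(entropy(np.array([0.5, 0.5])))  # 1.0 bit: a fair coin
print(entropy(np.array([0.9, 0.1])))  # ~0.47 bits: a biased coin is more predictable
```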

It's also necessary to touch upon AIC, Information Bottleneck, and discuss how information flows in neural networks – Mutual Information between layers (Information Theory of Deep Learning, Naftali Tishby), and much more. It's not certain that I'll be able to cover everything listed, but I'll try to get started.

Read more
Total votes 3: ↑3 and ↓0 +3
Comments 0

Machine Learning for price optimization

Level of difficulty Medium
Reading time 27 min
Views 5.5K

This is a translated and adapted version of an article I wrote for the Aha'22 conference (30 May 2022). It describes an approach to marketplace price optimization. Here I've outlined some important definitions and tried to define the scope and roles of ML, algorithms, and humans in optimal pricing. Although the article covers rather basic things, you may still find some new formulas and ideas, because these basics are "well-known only in very closed clubs"; besides, the real gem here is a detailed recipe for how ML engineers can build optimal pricing systems.
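As a generic illustration of the kind of problem discussed (not the article's actual recipe), optimal pricing can be framed as maximizing expected profit under a demand model; the constant-elasticity demand here is an assumption of the sketch:

```python
import numpy as np

# Assumed demand model with constant price elasticity e < -1: demand(p) = A * p**e
A, e, cost = 1000.0, -2.5, 10.0

prices = np.linspace(10.5, 40.0, 1000)
profit = (prices - cost) * A * prices ** e  # margin * expected demand

print(prices[np.argmax(profit)])            # grid-search optimum
print(cost * e / (1 + e))                   # closed form for this model: ~16.67
```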

Read more
Total votes 3: ↑3 and ↓0 +3
Comments 0

GNU Radio 802.11 black-box optimization

Level of difficulty Medium
Reading time 5 min
Views 2.1K

In this post I'll share my experience tuning a WiFi physical channel implemented on a software-defined radio (SDR) platform. WiFi looks like a very complicated thing, standardized over hundreds of pages. Could a non-expert with a PC and a couple of $100 devices (HackRFs) somehow improve it? Here I try to develop a WiFi optimization approach that is basically agnostic of protocol implementation details. There's some math and Python programming in it.
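To make "black-box optimization" concrete, here is a minimal sketch of the general pattern (my illustration, not the post's code): tune physical-layer parameters against a measured objective with a derivative-free optimizer. `measure_throughput` is a hypothetical stand-in for a real SDR measurement:

```python
import numpy as np
from scipy.optimize import minimize

def measure_throughput(params: np.ndarray) -> float:
    """Hypothetical stand-in: run the SDR link with the given
    (gain, bandwidth_scale) settings and return measured throughput."""
    gain, bw = params
    return -((gain - 0.3) ** 2 + (bw - 1.1) ** 2)  # toy surrogate objective

# Derivative-free search: no knowledge of 802.11 internals is needed.
result = minimize(lambda p: -measure_throughput(p),
                  x0=np.array([0.5, 1.0]), method="Nelder-Mead")
print(result.x)  # converges near the surrogate's optimum, [0.3, 1.1]
```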

Read more
Total votes 5: ↑5 and ↓0 +5
Comments 0

Data Phoenix Digest — ISSUE 2.2023

Reading time 2 min
Views 945

A video recording of our webinar on dstack and reproducible ML workflows, AVL binary tree operations, Ultralytics YOLOv8, training XGBoost, productionizing ML models, an introduction to forecasting ensembles, domain expansion of image generators, Muse, X-Decoder, Box2Mask, RoDynRF, AgileAvatar, and more.

Read more
Total votes 1: ↑1 and ↓0 +1
Comments 0

Building a GPT-like Model from Scratch with Detailed Theory and Code Implementation

Reading time 14 min
Views 33K

Unlock the power of Transformer neural networks and learn how to build your own GPT-like model from scratch. In this in-depth guide, we will delve into the theory and provide a step-by-step code implementation to help you create your own miniGPT model. The final code is only 400 lines and works on both CPUs and GPUs. If you want to jump straight to the implementation, here is the GitHub repo.

Transformers are revolutionizing the world of artificial intelligence. This simple but very powerful neural network architecture, introduced in 2017, has quickly become the go-to choice for natural language processing, generative AI, and more. With the help of transformers, we've seen the creation of cutting-edge AI products like BERT, GPT-x, DALL-E, and AlphaFold, which are changing the way we interact with language and solve complex problems like protein folding. And the exciting possibilities don't stop there: transformers are also making waves in computer vision with the advent of Vision Transformers.
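To make the core mechanism concrete, here is a minimal numpy sketch of scaled dot-product self-attention, the building block behind such models (an illustration, not the repository's code; a GPT additionally applies a causal mask to the scores):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one sequence.
    x: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (seq_len, seq_len)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax over keys
    return w @ v                                      # (seq_len, d_head)

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))
out = self_attention(x, *(rng.normal(size=(16, 8)) for _ in range(3)))
print(out.shape)  # (5, 8)
```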

Read more
Total votes 25: ↑25 and ↓0 +25
Comments 1

InvokeAI 2.2: UI Outpainting, Embedding Management and more

Reading time 2 min
Views 6K

InvokeAI 2.2 is now available to everyone. This update brings exciting features like UI Outpainting, Embedding Management, and more. See the highlighted updates below, or the full release notes for everything included in the release.

What’s new?
Total votes 5: ↑5 and ↓0 +5
Comments 2

I trained a neural network on my drawings and am giving the model away for free (and will teach you to create your own)

Reading time 2 min
Views 3.2K

Great for seamless patterns, abstract drawings, and watercolor-style images. How do you use it and train a neural network on your own pictures?

Download the model here: https://huggingface.co/netsvetaev/netsvetaev-free

I wanna know!
Total votes 6: ↑6 and ↓0 +6
Comments 0

How Yandex Made Their Biggest Improvement in the Search Engine with the Help of Toloka

Reading time 5 min
Views 2K

Toloka is a crowdsourcing platform and microtasking project launched by Yandex to quickly mark up large amounts of data. But how can such a simple concept play a crucial role in improving the work of neural networks?

Learn how
Total votes 1: ↑1 and ↓0 +1
Comments 0

Color image capturing device with pseudorandom patterns sets

Reading time 4 min
Views 686

The present invention relates to analog signal capturing devices in general, and in particular to monochrome or color image capture sensors, such as a scanner or a Charge-Coupled Device ("CCD") for video and photo cameras, that are almost free from moiré and aliasing. The invention also relates to methods for enhancing the resolution of an image capture device, and to a device for displaying digital color/grayscale images.

Read more
Total votes 1: ↑1 and ↓0 +1
Comments 1

FL_PyTorch is publicly available on GitHub

Reading time 2 min
Views 1.2K

FL_PyTorch: Optimization Research Simulator for Federated Learning is publicly available on GitHub.

FL_PyTorch is a suite of open-source software written in Python that builds on top of PyTorch, one of the most popular research Deep Learning (DL) frameworks. We built FL_PyTorch as a research simulator for federated learning (FL) to enable fast development, prototyping, and experimentation with new and existing FL optimization algorithms. Our system supports abstractions that provide researchers with sufficient flexibility to experiment with existing and novel approaches to advance the state of the art. The work appeared at the 2nd International Workshop on Distributed Machine Learning (DistributedML 2021). The paper, presentation, and appendix are available in the DistributedML'21 Proceedings (https://dl.acm.org/doi/abs/10.1145/3488659.3493775).
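For readers new to the area, the core loop that FL simulators of this kind study is federated averaging (FedAvg). The sketch below is a generic illustration of one round, not FL_PyTorch's API:

```python
import numpy as np

def fedavg_round(global_w, client_data, lr=0.1, local_steps=5):
    """One FedAvg round for linear least squares: each client runs
    local gradient steps from the global weights, the server averages."""
    client_ws = []
    for X, y in client_data:
        w = global_w.copy()
        for _ in range(local_steps):
            w -= lr * 2 * X.T @ (X @ w - y) / len(y)  # local SGD step
        client_ws.append(w)
    return np.mean(client_ws, axis=0)                 # server aggregation

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = [(X, X @ true_w + 0.01 * rng.normal(size=50))
           for X in (rng.normal(size=(50, 2)) for _ in range(4))]

w = np.zeros(2)
for _ in range(30):
    w = fedavg_round(w, clients)
print(w)  # approaches [1.0, -2.0]
```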

The project is distributed in open source form under Apache License Version 2.0. Code Repository: https://github.com/burlachenkok/flpytorch.

To become familiar with the tool, I recommend the following sequence of steps:

Read more
Total votes 1: ↑0 and ↓1 -1
Comments 0

Metaverses: hype or the future to come?

Reading time 5 min
Views 1.5K

Alexander Volchek, IT entrepreneur, CEO of the educational platform GeekBrains

Pretty much everyone in the IT community is talking about metaverses, NFTs, blockchain, and cryptocurrency. This time we will discuss metaverses, and come back to everything else in the letters to follow. Entrepreneurs and founders of tech giants are passionate about this idea, and investors are allocating millions of dollars to projects dealing with metaverses. Let's start with the basics.

Read more
Total votes 2: ↑1 and ↓1 0
Comments 0

What are neural networks and what do we need them for?

Reading time 4 min
Views 3.8K

Explaining through simple examples

For a long time, people have been thinking about how to create a computer that could think like a person. The advent of artificial neural networks is a significant step in this direction. Our brain consists of neurons that receive information from the sensory organs and process it: we recognize people we know by their faces, and we feel hungry when we see delicious food. All of this is the result of brain neurons working and interacting with each other. Artificial neural networks are based on the same principle, simulating the processes that occur in the human brain.

What are neural networks

An artificial neural network is software code that imitates the workings of the brain and is capable of self-learning. Like a biological network, an artificial one also consists of neurons, but these have a simpler structure.

If you connect neurons into a sufficiently large network with controlled interactions, they can perform quite complex tasks: for example, determining what is shown in a picture, or independently creating a photorealistic image from a text description.
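As a minimal sketch of what that "simpler structure" looks like in code (my illustration), an artificial neuron is just a weighted sum of inputs passed through an activation function:

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A single artificial neuron: weighted sum plus sigmoid activation."""
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid squashes the output into (0, 1)

# Three input signals, fixed illustrative weights:
print(neuron(np.array([0.5, -1.0, 2.0]), np.array([0.4, 0.3, 0.9]), bias=-0.5))
```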

Read more
Total votes 1: ↑1 and ↓0 +1
Comments 1

Multilingual Text-to-Speech Models for Indic Languages

Reading time 5 min
Views 2.3K

In this article, we shall provide some background on how multilingual multi-speaker models work and test an Indic TTS model that supports 9 languages and 17 speakers (Hindi, Malayalam, Manipuri, Bengali, Rajasthani, Tamil, Telugu, Gujarati, Kannada).

It seems a bit counter-intuitive at first that one model can support so many languages and speakers, given that each Indic language has its own alphabet, but we shall see how it was implemented.

Also, we shall list the specs of these models, such as supported sampling rates, and try something cool: making speakers of different Indic languages speak Hindi. Please, if you are a native speaker of any of these languages, share your opinion on how these voices sound, both in their respective language and in Hindi.

Read more
Total votes 2: ↑2 and ↓0 +2
Comments 0

Text-based CAPTCHA in 2022

Reading time 7 min
Views 4.5K

The first text-based CAPTCHA (we'll call it just CAPTCHA for the sake of brevity) was used in 1997 by the AltaVista search engine. It prevented bots from adding Uniform Resource Locators (URLs) to its web search engine.

Back then it was a decent defense measure. However, progress can't be stopped, and this defense was bypassed using the OCR tools available at the time (for example, FineReader).

CAPTCHAs became more complex: noise and distortions were added so that popular OCRs couldn't recognize the text. Then OCRs custom-made for this task appeared, which cost the attacking side extra money and expertise. CAPTCHA developers, in turn, had to understand what challenges the attackers faced and which distortions to add in order to make automated CAPTCHA recognition harder.

Because of a misunderstanding of the principles the OCRs were based on, some CAPTCHAs were given distortions that were more of a hassle for regular users than for a machine.

OCRs for different types of CAPTCHAs were built using heuristics, and the most complicated part was segmenting the CAPTCHA into standalone symbols, which could then be easily recognized by a CNN (for example, LeNet-5); an SVM showed good results even on raw pixels.
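To illustrate the "SVM on raw pixels" point, here is a generic sketch (using scikit-learn's 8x8 digits rather than real CAPTCHA segments): once the characters are segmented, each one is flattened to a raw pixel vector and classified directly:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Single segmented characters, flattened to raw pixel vectors: no feature engineering.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC().fit(X_train, y_train)
print(clf.score(X_test, y_test))  # high accuracy on this easy, clean dataset
```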

In this article I'll try to trace the whole history of CAPTCHA recognition, from heuristics to contemporary automated recognition systems, and we'll figure out whether CAPTCHA is still alive.

I’ll review the yandex.com CAPTCHA. The Russian version of the same CAPTCHA is more complex.

Read more
Total votes 4: ↑3 and ↓1 +2
Comments 0

Our new public speech synthesis in super-high quality, 10x faster and more stable

Reading time 3 min
Views 4.1K


In our last article we made a bunch of promises about our speech synthesis.


After a lot of hard work we finally have delivered upon these promises:


  • Model size reduced 2x;
  • New models are 10x faster;
  • We added flags to control stress;
  • Now the models can make proper pauses;
  • High quality voice added (and unlimited "random" voices);
  • All speakers squeezed into the same model;
  • Input length limitations lifted, now models can work with paragraphs of text;
  • Pauses, speed and pitch can be controlled via SSML (see the example after this list);
  • Sampling rates of 8, 24 or 48 kHz are supported;
  • Models are much more stable — they do not omit words anymore;
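For illustration, markup of this kind uses the standard SSML tags; the exact subset a given model accepts should be checked against its documentation (this snippet is a generic example, not taken from the release):

```python
# A generic SSML fragment: slow down a phrase, raise its pitch, insert a pause.
ssml = """
<speak>
  <prosody rate="slow" pitch="high">This part is slow and high-pitched.</prosody>
  <break time="500ms"/>
  Back to the default voice settings.
</speak>
"""
```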

This is a truly breakthrough achievement for us, and we are not planning to stop anytime soon. We will be adding as many languages as possible shortly (the CIS languages, English, European languages, Indic languages). We are also still planning to make our models an additional 2-5x faster.


We are also planning to add phonemes and a new model for stress, as well as to reduce the minimum amount of audio required to train a high-quality voice to 5-15 minutes.


As usual, you can try our models in our repo or in Colab.

Read more →
Total votes 13: ↑13 and ↓0 +13
Comments 0

ruDALL-E: Generating Images from Text. Facing down the biggest computational challenge in Russia

Reading time 11 min
Views 10K

Multimodality has led the pack in machine learning in 2021. Neural networks are wolfing down images, text, speech, and music all at the same time. OpenAI is, as usual, top dog, but as if in defiance of their name, they are in no hurry to share their models openly. At the beginning of the year, the company presented the DALL-E neural network, which generates 256x256 pixel images in answer to a written request. Descriptions of it can be found in articles on arXiv, and examples on their blog.

As soon as DALL-E flushed out of the bushes, Chinese researchers got on its tail. Their open-source CogView neural network does the same trick of generating images from text. But what about here in Russia? One might say that "investigate, master, and train" is our engineering motto. Well, we caught the scent, and today we can say that we have created from scratch a complete pipeline for generating images from descriptive textual input written in Russian.

In this article we present the ruDALL-E XL model, an open-source text-to-image transformer with 1.3 billion parameters, as well as the ruDALL-E XXL model, a text-to-image transformer with 12.0 billion parameters available in the SberCloud DataHub, and several other satellite models.

Read more
Total votes 3: ↑3 and ↓0 +3
Comments 4
