• The color of the Moon and the Sun from space in terms of RGB and color temperature

      It would seem that the question of the color of the Moon and the Sun as seen from space should be so simple for modern science that in our century there could be no problem with the answer. We are talking about the colors observed precisely from space, since the atmosphere changes them through Rayleigh scattering. «Surely this has long been written up in detail, with numbers, in some encyclopedia,» you will say. Well, now try searching the Internet for this information. Did you find it? Most likely not. The most you will find is a couple of words about the Moon having a brownish tint and the Sun a reddish one. But you will not find whether these tints are visible to the human eye, let alone the color values in RGB or at least the color temperatures. What you will find is a pile of photos and videos in which the Moon from space is absolutely gray, mostly in photos from the American Apollo program, and in which the Sun from space is depicted as white or even blue.

      In my personal opinion, this is nothing but a consequence of politics interfering in science. After all, the colors of the Moon and the Sun from space relate directly to the American flights to the Moon.

      I searched through many scientific articles and books looking for information about the color of the Moon and the Sun from space. Fortunately, it turned out that even though they give no direct answer in RGB, they do contain complete information about the spectral density of solar radiation and the reflectivity of the Moon across the spectrum. This is quite enough to obtain accurate RGB values; you just need to do the calculation carefully, which is exactly what I did. In this article I will share the results of the calculations with you and, of course, describe the calculations themselves in detail. And you will see the Moon and the Sun from space in their real colors!
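The calculation the author describes amounts to weighting a spectrum by the CIE color matching functions and converting the result to sRGB. As a rough illustration of that kind of computation (not the author's actual code), here is a sketch that renders a blackbody spectrum into sRGB; the Wyman–Sloan–Shirley analytic fit to the CIE 1931 curves, the 5 nm integration step, and the normalization to the brightest channel are my own assumptions.

```python
import math

def piecewise_gauss(x, mu, s1, s2):
    """One lobe of the analytic CMF fit: different widths left/right of the peak."""
    s = s1 if x < mu else s2
    return math.exp(-0.5 * ((x - mu) / s) ** 2)

def cie_xyz_bar(lam):
    """Wyman-Sloan-Shirley analytic approximation of the CIE 1931 x,y,z curves."""
    x = (1.056 * piecewise_gauss(lam, 599.8, 37.9, 31.0)
         + 0.362 * piecewise_gauss(lam, 442.0, 16.0, 26.7)
         - 0.065 * piecewise_gauss(lam, 501.1, 20.4, 26.2))
    y = (0.821 * piecewise_gauss(lam, 568.8, 46.9, 40.5)
         + 0.286 * piecewise_gauss(lam, 530.9, 16.3, 31.1))
    z = (1.217 * piecewise_gauss(lam, 437.0, 11.8, 36.0)
         + 0.681 * piecewise_gauss(lam, 459.0, 26.0, 13.8))
    return x, y, z

def planck(lam_nm, T):
    """Relative spectral radiance of a blackbody at temperature T (Planck's law)."""
    lam = lam_nm * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam ** 5 * (math.exp(h * c / (lam * k * T)) - 1.0))

def spectrum_to_srgb(T, reflectance=lambda lam: 1.0):
    """Integrate spectrum * reflectance against the CMFs, then map XYZ -> sRGB.
    `reflectance` would be the Moon's spectral albedo in the lunar case."""
    X = Y = Z = 0.0
    for lam in range(380, 781, 5):            # visible range, 5 nm steps
        s = planck(lam, T) * reflectance(lam)
        xb, yb, zb = cie_xyz_bar(lam)
        X += s * xb; Y += s * yb; Z += s * zb
    # XYZ -> linear sRGB (standard matrix), then normalize to the max channel
    r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    m = max(r, g, b)
    r, g, b = r / m, g / m, b / m
    def encode(u):                            # sRGB gamma encoding
        u = max(u, 0.0)
        return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055
    return encode(r), encode(g), encode(b)
```

For the Sun's effective temperature of about 5778 K this yields a slightly warm white (R above G above B), consistent with the reddish tint the article discusses; plugging in a measured solar spectrum and lunar albedo curve instead of the blackbody would reproduce the author's setup.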
      Read more →
    • How we made landmark recognition in Cloud Mail.ru, and why



        With the advent of mobile phones with high-quality cameras, we started taking more and more pictures and videos of bright and memorable moments in our lives. Many of us have photo archives that extend back over decades and comprise thousands of pictures, which makes them increasingly difficult to navigate. Just remember how long it took to find a picture of interest just a few years ago.

        One of Mail.ru Cloud’s objectives is to provide the handiest means for accessing and searching your own photo and video archives. For this purpose, we at Mail.ru Computer Vision Team have created and implemented systems for smart image processing: search by object, by scene, by face, etc. Another spectacular technology is landmark recognition. Today, I am going to tell you how we made this a reality using Deep Learning.
        Read more →
      • Automatic respiratory organ segmentation

          Manual lung segmentation takes about 10 minutes, and it requires a certain skill to get the same high-quality result as automatic segmentation. Automatic segmentation takes about 15 seconds.


          I assumed that without a neural network it would be possible to get an accuracy of no more than 70%. I also assumed that morphological operations are only a preparation step for more complex algorithms. But after processing the 40 tomographic studies I had on hand (admittedly, not many), the algorithm segmented the lungs without errors. Moreover, after testing on the first five cases, the algorithm did not change significantly and worked correctly on the other 35 studies without any change to the settings.


          Neural networks also have a disadvantage: to train them, we need hundreds of lung samples, which have to be marked up manually.
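The core morphological idea behind such network-free segmentation can be shown on a toy 2D slice: threshold air-like voxels by Hounsfield value, flood-fill away the air connected to the image border (the air around the patient), and keep the internal air pockets, which are the lungs. This is only a minimal sketch of that idea, not the author's algorithm; the threshold of -320 HU is an assumption.

```python
from collections import deque

def segment_lungs(hu, air_threshold=-320):
    """Toy lung segmentation on a 2D slice of Hounsfield-unit values.
    1. Threshold: air-like pixels fall below `air_threshold`.
    2. Flood-fill air connected to the border (air outside the body).
    3. Remaining air pockets inside the body are the lung mask."""
    h, w = len(hu), len(hu[0])
    air = [[hu[i][j] < air_threshold for j in range(w)] for i in range(h)]
    outside = [[False] * w for _ in range(h)]
    # Seed the flood fill with every air pixel on the image border
    q = deque((i, j) for i in range(h) for j in range(w)
              if (i in (0, h - 1) or j in (0, w - 1)) and air[i][j])
    for i, j in q:
        outside[i][j] = True
    while q:                      # BFS over 4-connected air pixels
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and air[ni][nj] and not outside[ni][nj]:
                outside[ni][nj] = True
                q.append((ni, nj))
    # Lung mask = air that is NOT connected to the outside
    return [[air[i][j] and not outside[i][j] for j in range(w)] for i in range(h)]
```

On real CT data this would be followed by 3D connected-component filtering and morphological closing, but the border-removal step above is what lets pure thresholding distinguish lungs from the surrounding air.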


          Read more →
        • AI-Based Photo Restoration



            Hi everybody! I’m a research engineer on the Mail.ru Group computer vision team. In this article, I’m going to tell the story of how we created an AI-based photo restoration project for old military photos. What is «photo restoration»? It consists of three steps:

            • we find all the image defects: fractures, scuffs, holes;
            • we inpaint the discovered defects, based on the pixel values around them;
            • we colorize the image.
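The three steps above form a simple sequential pipeline. As a purely illustrative sketch (the stage names and toy implementations here are my own stand-ins; in the real project each stage is a trained network), the composition looks like this:

```python
def restore(image, find_defects, inpaint, colorize):
    """Compose the three restoration stages; each stage is pluggable."""
    mask = find_defects(image)    # step 1: locate fractures, scuffs, holes
    clean = inpaint(image, mask)  # step 2: fill defects from surrounding pixels
    return colorize(clean)        # step 3: turn grayscale into color

# Toy stand-ins for the three stages, operating on a 2D grayscale image
# where a defect pixel is represented by None.
def find_holes(img):
    return [[v is None for v in row] for row in img]

def inpaint_mean(img, mask):
    """Fill each defect pixel with the mean of its valid 4-neighbors."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                vals = [img[i + di][j + dj]
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < h and 0 <= j + dj < w
                        and img[i + di][j + dj] is not None]
                if vals:
                    out[i][j] = sum(vals) // len(vals)
    return out

def colorize_gray(img):
    """Trivial colorization: replicate gray into an (R, G, B) triple."""
    return [[(v, v, v) for v in row] for row in img]
```

The point of the sketch is the interface, not the toy logic: because the stages only communicate through the image and the defect mask, each can be swapped for a deep model independently.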

            Further, I’ll describe every step of photo restoration and tell you how we got our data, what nets we trained, what we accomplished, and what mistakes we made.
            Read more →
          • Dog Breed Identifier: Full Cycle Development from Keras Program to Android App on Play Market

              With the recent progress in neural networks in general and image recognition in particular, it might seem that creating an NN-based application for image recognition is a simple routine operation. To some extent, this is true: if you can imagine an application of image recognition, then most likely someone has already done something similar. All you need to do is google it and repeat it.

              However, there are still countless little details that… no, they are not unsolvable. They simply take too much of your time, especially if you are a beginner. What would help is a step-by-step project done right in front of you, start to finish; a project that does not contain «this part is obvious, so let's skip it» statements. Well, almost :)

              In this tutorial we are going to walk through a Dog Breed Identifier: we will create and train a Neural Network, then port it to Java for Android and publish it on Google Play.

              For those of you who want to see the end result, here is the link to the NeuroDog App on Google Play.

              Web site with my robotics: robotics.snowcron.com.
              Web site with the NeuroDog User Guide.

              Here is a screenshot of the program:

              [image]

              Read more →