
# Mathematics

Mother of all sciences

## The Collatz conjecture is the coolest math trick of all time

4 min
219

On the Internet and in non-fiction literature you can often find various mathematical tricks. The Collatz conjecture leaves all such tricks behind. At first glance it may seem like some kind of trick with a catch. However, there is no catch. You think of a number and repeatedly apply one of two arithmetic operations to it. Surprisingly, the result of these actions is always the same. Or is it always?
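For the curious, the two operations are: halve the number if it is even, otherwise triple it and add one. A minimal sketch of the iteration (the function name is mine):

```rust
// Collatz rule: n/2 if n is even, 3n + 1 if n is odd.
// Returns how many steps it takes to reach 1.
fn collatz_steps(mut n: u64) -> u32 {
    let mut steps = 0;
    while n != 1 {
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}

fn main() {
    // 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
    println!("{}", collatz_steps(6)); // 8 steps
}
```

The conjecture is precisely the claim that this loop terminates for every starting value; no one has proved it.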

+4

## Let's start in GameDev

2 min
952

I have just started learning game development and decided to write a personal blog about it. There you can find the information (resources, blogs, courses, books) I've gathered, as well as my personal problems with learning.

+1

## Proofs by induction using Rust's type system

5 min
1.1K

Rust's type system is quite powerful, as it allows encoding complex relationships between user-defined types using recursive rules that the compiler applies automatically. The idea behind this post is to use some of those rules to encode properties of our domain. Here we take a look at the Peano axioms defined for natural numbers and try to derive some of them using traits, trait bounds, and recursive `impl` blocks. We want to make the compiler work for us by verifying facts about our domain, so that we can invoke the compiler to check whether a particular statement holds. Our end goal is to encode natural numbers as types and their relationships as traits, such that only valid relationships compile (e.g., if we define types for 1 and 3 and a less-than relationship, then 1 < 3 should compile but 3 < 1 shouldn't; all of this encoded in Rust's ordinary syntax, of course).

Let's define some natural numbers on the type level first.
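As a rough sketch of the kind of encoding the article builds up (the names and exact formulation here are mine, not necessarily the author's):

```rust
use std::marker::PhantomData;

// Type-level Peano naturals: Zero and Succ<N> ("N + 1").
struct Zero;
struct Succ<N>(PhantomData<N>);

// "Less than" as a trait: A < B holds exactly when `A: LessThan<B>`.
trait LessThan<N> {}

// 0 < M + 1 for any M.
impl<M> LessThan<Succ<M>> for Zero {}
// N + 1 < M + 1 whenever N < M.
impl<N, M> LessThan<Succ<M>> for Succ<N> where N: LessThan<M> {}

// This function only type-checks for pairs where A < B holds.
fn assert_less_than<A: LessThan<B>, B>() {}

type One = Succ<Zero>;
type Three = Succ<Succ<Succ<Zero>>>;

fn main() {
    assert_less_than::<One, Three>(); // compiles: 1 < 3 holds
    // assert_less_than::<Three, One>(); // would be a compile error: 3 < 1
    println!("1 < 3 type-checks");
}
```

The invalid statement is rejected at compile time, which is exactly the "make the compiler verify facts" goal described above.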

+6

## The On-Line Encyclopedia of Integer Sequences today

14 min
890

You can encounter integer sequences all around combinatorics, number theory, and recreational mathematics. And when there is a multitude of objects of a similar form, one can create an index for these objects. The On-Line Encyclopedia of Integer Sequences, OEIS, is such an index.

This is a translation of my article The On-Line Encyclopedia of Integer Sequences in 2021, published in Mat. Pros. Ser. 3 28, 199–212 (2021).

This article covers the On-Line Encyclopedia inclusion criteria, its editorial process, its role in mathematics, and its future.

0

## Methodology for calculating results of a task set: taking into account its level of difficulty

3 min
1.6K

In the world of academic knowledge evaluation, objective calculation over large data sets presents a serious problem. Can a student studying in an Advanced Maths class and getting B marks be evaluated equally with another student getting B marks in a General Maths class? Can we create a system that would take into account the level of difficulty those students face?

This article describes a system of independent evaluation we have been using for school olympiads in five subjects (Mathematics, English Language, Russian Language, Tatar Language, Social Science) for students in grades 1 to 11. Each academic year we organise six qualification tournaments, with about 15,000 students from different regions of Russia. We then select the top ten participants in each subject and each grade for participation in the final (seventh) tournament, where only the best of the best are chosen. This means that 550 participants compete in the final tournament, which is about 5.5% of all participants in the academic year.

It is obvious that those multiple tournaments cannot be absolutely homogeneous, and inevitably the levels of difficulty of each set of tasks vary. Therefore, it is critical for us to take those variations of difficulty into consideration and calculate the results in the most objective manner.
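One common approach to this kind of adjustment (a generic illustration only, not necessarily the method the article describes) is to standardize each tournament's scores by that tournament's own mean and spread:

```rust
// Z-score standardization: express each raw score in units of standard
// deviations from that tournament's mean, so results from task sets of
// different difficulty become comparable.
fn z_scores(scores: &[f64]) -> Vec<f64> {
    let n = scores.len() as f64;
    let mean = scores.iter().sum::<f64>() / n;
    let var = scores.iter().map(|s| (s - mean).powi(2)).sum::<f64>() / n;
    let sd = var.sqrt();
    scores.iter().map(|s| (s - mean) / sd).collect()
}

fn main() {
    // A hard tournament (low raw scores) and an easy one (high raw scores);
    // the numbers are made up for illustration.
    let hard = z_scores(&[40.0, 50.0, 60.0]);
    let easy = z_scores(&[70.0, 80.0, 90.0]);
    // The top student in each tournament gets the same standardized score.
    println!("{:.2} vs {:.2}", hard[2], easy[2]);
}
```

Real scoring systems for olympiads are usually more elaborate (e.g. item-response models), but the standardization idea conveys what "taking difficulty into account" means.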

0

## Let’s Discuss the Lorentz Transforms – Part the Last: The Real Derivation, or The Nail in the Casket

9 min
695

In this post there are a lot of references to the previous one – it is essential that you read it before getting down to this.

In my previous posts (see the list below) I tried to express my doubts as to whether there is a real physical substrate to the Lorentz transforms. The assumptions about the constancy of the speed of light, the homogeneity of space-time, and the principle of relativity do not and cannot lead to the deduction of the Lorentz transforms – Einstein himself, for one, gets quite different transforms, and from those he goes over directly to the Lorentz transforms, obviously missing a logical link (see Einstein p. 7, and also Part 1 of this discussion). As for the light-like interval being equal to zero, we saw that it can be attached to such assumptions only in error and cannot in itself be a foundation of a theory. I have to conclude that all that fine, intricately latticed construction of scientifictitious, physics-like arguments with the air of being profound is nothing but a smokescreen creating the appearance of a physical foundation while there is none.

What is then the real foundation of the Lorentz transforms? Let’s start from the rear end, the Minkowski mathematics. Historically, this appeared later than special relativity as a non-contradictory model of the Lorentz mathematical world; previously mentioned Varićak was among those who took part in its creation. Notwithstanding its coming later in history, it can be used as the starting point for derivation of the Lorentz transforms.

+3

## Let’s Discuss the Lorentz Transforms – Intermission: Rapidity, and What it Means

4 min
535

I thought my previous post rather funny, and was surprised to see it initially receive so few views. I thought the entertainment had flopped, but fortunately I was wrong. I therefore feel it my duty to my readers to address the subject of the Landau & Lifschitz proof of the invariance of the interval.

You can find the summary of it in Wikipedia. Making their starting point the light-like interval always being equal to zero, Landau & Lifschitz seem to make a great fuss about it. The Wikipedia article even says: ‘This is the immediate mathematical consequence of the invariance of the speed of light.’ No, it is not.

I beg everyone’s pardon, but the light-like interval always being equal to zero is nothing else but the following statement: ‘The length of a ray of light will always be equal to the length of this ray of light’. Sounds like a cool story, bros and sis, but I cannot see what further inferences can be drawn from it. The ‘proof’ of this truism cannot fail under any circumstances whatever – whether you keep the speed of light invariant, or keep or change the metric of space or time or both – or make both metric and speed of light change – the light-like interval will remain equal to zero. I am okay with anyone wanting to prove it if they feel like it, but you cannot make it an ‘immediate mathematical consequence of the invariance of the speed of light’. Neither is it possible to make the constancy of the speed of light a consequence of the invariance of the light-like interval for the reason already mentioned: this is a truism. It does not prove anything, nor can it be a consequence of anything. When Landau & Lifschitz insist that this is a consequence of the constancy of the speed of light, that is either an error or a downright subterfuge, a means employed to create a spectre of logical connection between two unconnected notions, and charge this ghostly connection with pretended significance. And, since the following proof of invariance of an arbitrary interval hangs on the invariance of the light-like interval, we can altogether dismiss it: the necessity of introduction of such a measure as interval cannot be derived from the statement that a length of something will be equal to itself in whatever frame of reference it is measured.

+3

## Let’s Discuss the Lorentz Transforms – Part 2: The Equation of the Sphere, or Is It?

5 min
667

With the previous discussion done, we have surmounted the difficult waters and are now sailing into something much more pleasant and, hopefully, even entertaining.

As I promised, we will be discussing the invariance of the interval, that is to say, the following relation:

+3

## Let’s Discuss the Lorentz Transforms – Part 1: Einstein’s 1905 Derivation

6 min
756

Even as I am posting this, I can see that my previous post received a hundred and twenty plus views, but no comments yet. I am saying again that my pursuit is not to give an answer, but to ask a question. I only wonder if there is in fact no answer to the questions I am asking – but anyway, I will continue asking them. If you know how to deal with the problems I am setting – or happen to understand they are not problems at all, I will be most grateful for a constructive input in the comments section. I am sorry to say I was unable to make this post sound as light and unpretentious as the previous one. This one deals with harder questions, is a little wordy, and requires at least elementary knowledge of calculus to be read properly.

In my previous post we discussed the ‘Galilean’ velocity composition used for introduction or substantiation of relative simultaneity. It is not the only point where Einstein resorts to sums c + v or c – v: he does that actually to deduce the Lorentz transforms, notwithstanding the fact that a corollary of the Lorentz transforms is a different velocity composition which makes the above sums null and void. It looks like the conclusions of this deduction negate its premises – but this is not the only strange thing about Einstein’s deduction of the Lorentz transforms undertaken by him in his famous 1905 article.

In Paragraph 3 of that paper Einstein is considering the linear function τ (the time of the reference frame in motion) of the four variables x′ = x – vt, y, z, and t (the three spatial coordinates and time of the frame of reference at rest) and eventually derives a relation between the coefficients of this linear function.

+3

## Let’s Discuss Relativity of Simultaneity

4 min
631

There is one only too obvious problem with relativity of simultaneity in the way it is normally introduced, and I have never found an answer to it – what’s more, I never read or heard anyone formulate it. I will be grateful for an enlightening discussion.

The framework of the thought experiment introducing relativity of simultaneity is this. Two rays of light travel in opposite directions and reach their destination simultaneously in one frame of reference and at different moments in the other.

For example, in the Wikipedia article on the subject you can read:

‘A flash of light is given off at the center of the traincar just as the two observers pass each other. For the observer on board the train, the front and back of the traincar are at fixed distances from the light source and as such, according to this observer, the light will reach the front and back of the traincar at the same time.

‘For the observer standing on the platform, on the other hand, the rear of the traincar is moving (catching up) toward the point at which the flash was given off, and the front of the traincar is moving away from it. As the speed of light is finite and the same in all directions for all observers, the light headed for the back of the train will have less distance to cover than the light headed for the front. Thus, the flashes of light will strike the ends of the traincar at different times’.

I am always not a little surprised at the modesty displayed by the authors of such illustrations. If we grant the statement ‘the light headed for the back of the train will have less distance to cover than the light headed for the front’ to be true – how then do we evaluate the magnitude of the effect? Or, in other words, how much longer is one distance in comparison to the other?
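For what it is worth, under the assumptions stated in the quoted illustration the magnitude has a straightforward classical answer: the light closes on the rear at c + v and on the front at c - v, so the two path lengths differ by the factor (c + v)/(c - v). A quick numeric check (the numbers are my own illustration):

```rust
// Ratio of the forward path length to the backward path length in the
// platform frame, taking the quoted c + v / c - v closing speeds at face value.
fn distance_ratio(c: f64, v: f64) -> f64 {
    (c + v) / (c - v)
}

fn main() {
    let c = 299_792_458.0; // speed of light, m/s
    let v = 30.0;          // train speed, m/s
    // For everyday speeds the ratio exceeds 1 by roughly 2v/c.
    println!("front/back path ratio = {:.10}", distance_ratio(c, v));
}
```

Whether this evaluation is legitimate is, of course, exactly the question the post goes on to raise.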

+1

## FL_PyTorch is publicly available on GitHub

2 min
983

FL_PyTorch: Optimization Research Simulator for Federated Learning is publicly available on GitHub.

FL_PyTorch is a suite of open-source software written in Python that builds on top of one of the most popular research deep learning (DL) frameworks, PyTorch. We built FL_PyTorch as a research simulator for FL to enable fast development, prototyping, and experimentation with new and existing FL optimization algorithms. Our system supports abstractions that provide researchers with sufficient flexibility to experiment with existing and novel approaches to advance the state of the art. The work is in the proceedings of the 2nd International Workshop on Distributed Machine Learning, DistributedML 2021. The paper, presentation, and appendix are available in the DistributedML'21 Proceedings (https://dl.acm.org/doi/abs/10.1145/3488659.3493775).

The project is distributed in open source form under Apache License Version 2.0. Code Repository: https://github.com/burlachenkok/flpytorch.

To become familiar with that tool, I recommend the following sequence of steps:

-1

## What are neural networks and what do we need them for?

4 min
1.8K

Explaining through simple examples

For a long time, people have been thinking about how to create a computer that could think like a person. The advent of artificial neural networks is a significant step in this direction. Our brain consists of neurons that receive information from the sensory organs and process it: we recognize people we know by their faces, and we feel hungry when we see delicious food. All of this is the result of brain neurons working and interacting with each other. Artificial neural networks are based on the same principle, simulating the processes occurring in the human brain.

What are neural networks

Artificial neural networks are software code that imitates the work of the brain and is capable of self-learning. Like a biological network, an artificial network also consists of neurons, but they have a simpler structure.

If you connect neurons into a sufficiently large network with controlled interaction, they will be able to perform quite complex tasks. For example, determining what is shown in a picture, or independently creating a photorealistic image based on a text description.
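A single artificial neuron of the kind described above can be sketched in a few lines (the weights and inputs are made up for illustration):

```rust
// One artificial neuron: a weighted sum of inputs plus a bias, passed
// through an activation function (here the logistic sigmoid).
fn neuron(inputs: &[f64], weights: &[f64], bias: f64) -> f64 {
    let sum: f64 = inputs.iter().zip(weights).map(|(x, w)| x * w).sum::<f64>() + bias;
    1.0 / (1.0 + (-sum).exp()) // sigmoid squashes the result into (0, 1)
}

fn main() {
    let out = neuron(&[1.0, 0.5], &[0.4, -0.2], 0.1);
    println!("{out:.3}");
}
```

A network is just many such neurons wired together, with training adjusting the weights; that is the "controlled interaction" the paragraph refers to.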

+1

## Math introduction to Deep Theory

4 min
2K

In this article, we would like to compare the core mathematical bases of the two most popular theories with that of associative theory.

Calculating deep
+1

## Riddles of the fast Fourier transform

10 min
1.1K
Tutorial

• The method of phase-magnitude interpolation (PMI)

• Accurate measure of frequency, magnitude and phase of signal harmonics

• Detection of resonances

The Fast Fourier Transform (FFT) algorithm is an important tool for analyzing and processing signals of various nature.

It allows one to reconstruct the magnitude and phase spectrum of a signal in the frequency domain from amplitude samples in the time domain, and the method is computationally optimized with modest memory consumption.

Although no information about the signal is lost during the conversion (the calculations are reversible up to rounding), the algorithm has some peculiarities that hinder high-precision analysis and fine processing of the results.

The article presents an effective way to overcome such "inconvenient" features of the algorithm.
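As a reference point for what the transform computes, here is a naive O(N²) discrete Fourier transform; a real FFT produces the same spectrum, only faster (this sketch is mine, not the article's code):

```rust
// Naive DFT: for each frequency bin k, correlate the signal with a
// complex exponential. Returns (real, imaginary) pairs per bin.
fn dft(signal: &[f64]) -> Vec<(f64, f64)> {
    let n = signal.len();
    (0..n)
        .map(|k| {
            let mut re = 0.0;
            let mut im = 0.0;
            for (t, &x) in signal.iter().enumerate() {
                let angle = -2.0 * std::f64::consts::PI * (k * t) as f64 / n as f64;
                re += x * angle.cos();
                im += x * angle.sin();
            }
            (re, im)
        })
        .collect()
}

fn main() {
    // One full cosine period over 8 samples: the energy lands in bins 1 and 7.
    let n = 8;
    let signal: Vec<f64> = (0..n)
        .map(|t| (2.0 * std::f64::consts::PI * t as f64 / n as f64).cos())
        .collect();
    let spectrum = dft(&signal);
    let mag1 = (spectrum[1].0.powi(2) + spectrum[1].1.powi(2)).sqrt();
    println!("bin 1 magnitude = {mag1}"); // N/2 for a unit-amplitude cosine
}
```

The "peculiarities" the article addresses (leakage, picket-fence effect) appear as soon as the signal's frequency falls between such bins.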

0

## The Ideal Economy

8 min
7K
Recovery mode

I am not an economist, but in light of current events involving cryptocurrencies and the economy in general, I would like to share my thoughts on a kind of ideal economy, around which everything is now revolving.

-1

## One does not simply calculate the absolute value

4 min
31K
Translation

It seems that the problem of calculating the absolute value of a number is completely trivial. If the number is negative, change the sign. Otherwise, just leave it as it is. In Java, it may look something like this:

```java
public static double abs(double value) {
    if (value < 0) {
        return -value;
    }
    return value;
}
```

It seems to be too easy even for a junior interview question. Are there any pitfalls here?
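To preview two classic pitfalls of this kind of naive implementation (shown here in Rust for brevity; the same IEEE 754 and two's-complement issues apply to Java's `double` and `int`):

```rust
// Direct port of the naive Java version above.
fn naive_abs(value: f64) -> f64 {
    if value < 0.0 { -value } else { value }
}

fn main() {
    // Pitfall 1: -0.0 < 0.0 is false in IEEE 754, so the naive version
    // returns negative zero instead of positive zero.
    println!("{}", naive_abs(-0.0).is_sign_negative()); // true: a bug
    // Pitfall 2: in two's complement, the minimum integer has no positive
    // counterpart, so its absolute value cannot be represented at all.
    println!("{:?}", i32::MIN.checked_abs()); // None
}
```

Whether these are the exact cases the article dissects I leave to the article itself, but they show why the question is not as trivial as it seems.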

+9

## Measuring Traffic Rate by Means of U-models

21 min
1.5K

*An artist's impression of measuring a stream's rate.*

In one of our previous publications, we talked about a way to measure event stream rate using a counter based on exponential decay. It turns out that the idea of such a counter has an interesting generalization. This paper by Artem Shvorin and Dmitry Kamaldinov, Qrator Labs, reveals it.
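The exponential-decay counter mentioned above can be sketched roughly as follows (a simplified illustration of the general idea, not Qrator's actual model):

```rust
// Exponentially decaying event counter: older events contribute less and
// less, so (decayed value) / tau approximates the recent event rate.
struct DecayCounter {
    value: f64,
    tau: f64,       // decay time constant, seconds
    last_time: f64, // timestamp of the last update, seconds
}

impl DecayCounter {
    fn new(tau: f64) -> Self {
        DecayCounter { value: 0.0, tau, last_time: 0.0 }
    }
    // Register one event at time `now`.
    fn hit(&mut self, now: f64) {
        let dt = now - self.last_time;
        self.value = self.value * (-dt / self.tau).exp() + 1.0;
        self.last_time = now;
    }
    // Approximate events per second over roughly the last tau seconds.
    fn rate(&self, now: f64) -> f64 {
        let dt = now - self.last_time;
        self.value * (-dt / self.tau).exp() / self.tau
    }
}

fn main() {
    let mut c = DecayCounter::new(1.0);
    // 10,000 events spread uniformly over 10 seconds: about 1000 events/sec.
    for i in 0..10_000 {
        c.hit(i as f64 * 0.001);
    }
    println!("estimated rate: {:.1}", c.rate(10.0));
}
```

The generalization the paper develops ("U-models") starts from this kind of counter.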
+4

## AngouriMath 1.3 update

5 min
3.9K

Four months of awesome work together with a few new contributors have finally resulted in a new major release, which I'm happy to announce.

Now we have completely new matrices, an improved parser, a lot of new functions, an almost rewritten interactive package (for working in Jupyter), and much more.

This article is about a big update to a FOSS symbolic algebra library for .NET. I hope it may be interesting to someone!

+5

## Overview of Morris's counters

7 min
999

When implementing streaming algorithms, counting of events often occurs, where an event means something like a packet arrival or a connection establishment. Since the number of events is large, the available memory can become a bottleneck: an ordinary n-bit counter allows counting no more than 2^n - 1 events.
One way to handle a larger range of values using the same amount of memory is approximate counting. This article provides an overview of the well-known Morris algorithm and some of its generalizations.

Another way to reduce the number of bits required for counting mass events is to use decay. We discuss such an approach here [3], and we are going to publish another blog post on this particular topic shortly.

In the beginning of this article, we analyse one straightforward probabilistic counting algorithm and highlight its shortcomings (Section 2). Then (Section 3) we describe the algorithm proposed by Robert Morris in 1978 and point out its most essential properties and advantages. For most non-trivial formulas and statements the text contains our proofs; the demanding reader can find them in the inserts. In the following three sections, we outline valuable extensions of the classic algorithm: you will learn what Morris's counters and exponential decay have in common, how to improve the accuracy by sacrificing the maximum value, and how to handle weighted events efficiently.
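For reference, the classic Morris counter stores only a small value c, increments it with probability 2^(-c), and estimates the event count as 2^c - 1. A sketch with a toy PRNG so the example has no dependencies (details are mine):

```rust
// A tiny deterministic xorshift PRNG, used only to keep this self-contained.
struct XorShift(u64);
impl XorShift {
    fn next_f64(&mut self) -> f64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

// Morris counter: stores only c (a log-scale count).
struct Morris {
    c: u32,
}
impl Morris {
    fn increment(&mut self, rng: &mut XorShift) {
        // Bump c with probability 2^(-c); c grows like log2(n).
        if rng.next_f64() < 0.5_f64.powi(self.c as i32) {
            self.c += 1;
        }
    }
    // Unbiased estimate of the number of increments: 2^c - 1.
    fn estimate(&self) -> f64 {
        2.0_f64.powi(self.c as i32) - 1.0
    }
}

fn main() {
    let mut rng = XorShift(0x9E3779B97F4A7C15);
    let mut m = Morris { c: 0 };
    for _ in 0..100_000 {
        m.increment(&mut rng);
    }
    // The estimate is random but on the right order of magnitude (~10^5),
    // while the counter itself needs only about log2(log2(n)) bits.
    println!("c = {}, estimate = {}", m.c, m.estimate());
}
```

The relative error of a single counter is substantial; the generalizations surveyed in the article trade range for accuracy in various ways.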

+12

## Compilation of math functions into Linq.Expression

12 min
4.9K

Here I am going to cover my own approach to compiling mathematical functions into Linq.Expression. What we will have implemented by the end:

1. Arithmetical operations, trigonometry, and other numerical functions

2. Boolean algebra (logic), less/greater and other operators

3. Arbitrary types as the function's input, output, and intermediate values

Hope it's going to be interesting!
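Linq.Expression is specific to .NET; as a language-neutral sketch of the same idea, building an expression tree and "compiling" it into a callable, consider the following (my illustration, not the article's code):

```rust
// A tiny expression tree, analogous in spirit to building a
// Linq.Expression and compiling it into a delegate.
enum Expr {
    Const(f64),
    Var, // the single function argument
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
    Sin(Box<Expr>),
}

impl Expr {
    // "Compile" the tree into a closure of one variable.
    fn compile(self) -> Box<dyn Fn(f64) -> f64> {
        match self {
            Expr::Const(c) => Box::new(move |_| c),
            Expr::Var => Box::new(|x| x),
            Expr::Add(a, b) => {
                let (fa, fb) = (a.compile(), b.compile());
                Box::new(move |x| fa(x) + fb(x))
            }
            Expr::Mul(a, b) => {
                let (fa, fb) = (a.compile(), b.compile());
                Box::new(move |x| fa(x) * fb(x))
            }
            Expr::Sin(a) => {
                let fa = a.compile();
                Box::new(move |x| fa(x).sin())
            }
        }
    }
}

fn main() {
    // f(x) = 2 * x + sin(x)
    let f = Expr::Add(
        Box::new(Expr::Mul(Box::new(Expr::Const(2.0)), Box::new(Expr::Var))),
        Box::new(Expr::Sin(Box::new(Expr::Var))),
    )
    .compile();
    println!("{}", f(0.0)); // 2*0 + sin(0) = 0
}
```

The .NET version additionally JIT-compiles the tree to IL; the structure of the problem, walking a typed AST and emitting something callable, is the same.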