# Mathematics of Machine Learning based on Lattice Theory

This is the third article in a series of works (see also the first one and the second one) describing a Machine Learning system based on Lattice Theory, named the 'VKF-system'. It uses a structural (lattice-theoretic) approach to represent training objects and their fragments, which are considered to be causes of the target property. The system computes these fragments as similarities between some subsets of training objects. There exists an algebraic theory for such representations, called Formal Concept Analysis (FCA). However, the system uses randomized algorithms to remove the drawbacks of the unrestricted approach. The details follow…

#### Introduction

We begin by demonstrating our approach on a school problem:
to find sufficient conditions, in terms of symmetries, for a convex quadrangle to admit a circumscribed circle, and to predict this property for the rectangle.

Hence there are two target classes: positive (there exists a circle circumscribed around the quadrangle) and negative (there is none).

The training sample contains the square, the isosceles trapezoid, the diamond, and the deltoid (see the row labels in the Table below).

A single test example is the rectangle.

We represent each quadrangle by a subset of attributes related to its possible symmetries:

"There exists a central symmetry point" (A),
"The group of rotations is trivial" (B),
"The group of rotations contains at least two elements" (C),
"There is a diagonal symmetry axis" (D),
"There is a non-diagonal symmetry axis" (E).
They correspond to the column labels in the Table below.

| quadrangle | target | A | B | C | D | E |
|------------|--------|---|---|---|---|---|
| square     | 1      | 1 | 0 | 1 | 1 | 1 |
| trapezoid  | 1      | 0 | 1 | 0 | 0 | 1 |
| diamond    | 0      | 1 | 0 | 1 | 1 | 0 |
| deltoid    | 0      | 0 | 1 | 0 | 1 | 0 |
| rectangle  | ?      | 1 | 0 | 1 | 0 | 1 |

To discover possible causes (in terms of symmetries), the system computes similarities (common attributes) between training examples of the same sign. Hence we have

$$\langle\{\text{square}, \text{trapezoid}\}, \{E\}\rangle,$$

where the first component collects the parents (all the training objects whose similarity is computed), and the second is the common fragment of these examples.

Since the common fragment $\{E\}$ is a part of the rectangle's description $\{A, C, E\}$, the system predicts the target property positively, i.e. a circle can be circumscribed around the rectangle. This corresponds to the Analogy cognitive procedure of the JSM-method. The analogues of the rectangle are its parents (the square and the trapezoid) that share the same fragment as their common part.

However, we can exchange the signs: the similarity between the negative examples is

$$\langle\{\text{diamond}, \text{deltoid}\}, \{D\}\rangle.$$
This observation leads to Argumentation Logics, but we prefer to omit the details here. The interested reader is referred to the author's papers in: Финн В.К., Аншаков О.М., Виноградов Д.В. (Eds.). Many-valued Logics and Their Applications. Vol. 2: Logics in Artificial Intelligence Systems. Moscow: URSS, 2020, 238 pp. ISBN 978-5-382-01977-2 (in Russian).

English translations may be requested from the author (Allerton Press is now a part of Springer, but the original translations are not available through any sites).

However, the similarity between the negative examples demonstrates the 'counter-example forbidden' condition, since its fragment $\{D\}$ is a part of the description $\{A, C, D, E\}$ of the opposite-sign example 'square'.
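The whole toy example can be checked mechanically. Here is a minimal sketch (all identifiers are my own, illustrative ones) that encodes the quadrangles as attribute sets from the Table and recomputes both similarities as set intersections:

```python
# Toy reconstruction of the quadrangle example; all names are
# illustrative, not taken from the VKF-system code.

positive = {
    "square":    {"A", "C", "D", "E"},
    "trapezoid": {"B", "E"},
}
negative = {
    "diamond": {"A", "C", "D"},
    "deltoid": {"B", "D"},
}
rectangle = {"A", "C", "E"}  # description of the single test example

def similarity(examples):
    """Parents list and common fragment (intersection of attribute sets)."""
    return set(examples), set.intersection(*examples.values())

pos_parents, pos_fragment = similarity(positive)
neg_parents, neg_fragment = similarity(negative)

print(sorted(pos_parents), pos_fragment)   # ['square', 'trapezoid'] {'E'}
print(sorted(neg_parents), neg_fragment)   # ['deltoid', 'diamond'] {'D'}

# Analogy: the positive fragment occurs in the rectangle's description.
print(pos_fragment <= rectangle)           # True

# 'Counter-example forbidden': the negative fragment occurs in the
# positive example 'square', so the negative similarity is rejected.
print(neg_fragment <= positive["square"])  # True
```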

#### 1. Formal Concept Analysis

Initially the author planned to present the theory in terms of the so-called JSM-method of automatic hypotheses generation. But its creator expressed doubts about the possibility of conveying the 'rich ideas of the JSM-method' in a popular fashion. Hence the author decided to use the FCA language for this article. However, the author will use some terms of his own, paired with the original ones (in brackets), where he prefers to change the terminology.

A sample (=formal context) is a triple $(G, M, I)$, where $G$ and $M$ are finite sets and $I \subseteq G \times M$. The elements of $G$ and $M$ are called objects and attributes, respectively. As usual, we write $gIm$ instead of $\langle g, m\rangle \in I$ to denote that object $g$ has attribute $m$.

For $A \subseteq G$ and $B \subseteq M$, define

$$A' = \{m \in M \mid \forall g \in A\ (gIm)\},$$

$$B' = \{g \in G \mid \forall m \in B\ (gIm)\};$$

so $A'$ is the set of attributes common to all the objects in $A$, and $B'$ is the set of objects possessing all the attributes in $B$. The maps $(\cdot)' : 2^G \to 2^M$ and $(\cdot)' : 2^M \to 2^G$ are called the polars (=derivation operators) of the sample $(G, M, I)$.

A candidate (=formal concept) of the sample $(G, M, I)$ is defined to be a pair $\langle A, B\rangle$, where $A \subseteq G$, $B \subseteq M$, $A' = B$, and $B' = A$. The first component $A$ of the candidate $\langle A, B\rangle$ is called the parents list (=extent) of the candidate, and the second component $B$ is called its fragment (=intent). The set of all candidates of the sample $(G, M, I)$ is denoted by $L(G, M, I)$.

It is an easy exercise to check that $L(G, M, I)$ is a lattice with the operations

$$\langle A_1, B_1\rangle \wedge \langle A_2, B_2\rangle = \langle A_1 \cap A_2, (B_1 \cup B_2)''\rangle,$$

$$\langle A_1, B_1\rangle \vee \langle A_2, B_2\rangle = \langle (A_1 \cup A_2)'', B_1 \cap B_2\rangle.$$

We use a special case: for $\langle A, B\rangle \in L(G, M, I)$, $g \in G$, and $m \in M$ define

$$CbO(\langle A, B\rangle, g) = \langle (A \cup \{g\})'', B \cap \{g\}'\rangle,$$

$$CbO(\langle A, B\rangle, m) = \langle A \cap \{m\}', (B \cup \{m\})''\rangle.$$

We call these operations CbO because the first one is used in the well-known Close-by-One (CbO) algorithm to generate all the elements of $L(G, M, I)$.
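These operations are easy to prototype. The sketch below (with illustrative helper names of my own, assuming the context is stored as a set of object–attribute pairs) implements both polars and both CbO operations:

```python
# Minimal sketch of polars and CbO operations; names are illustrative.

def up(A, M, I):
    """A': attributes common to all objects of A."""
    return frozenset(m for m in M if all((g, m) in I for g in A))

def down(B, G, I):
    """B': objects possessing all attributes of B."""
    return frozenset(g for g in G if all((g, m) in I for m in B))

def cbo(A, B, x, G, M, I):
    """CbO(<A,B>, x) for x an object (x in G) or an attribute (x in M)."""
    if x in G:   # <(A U {x})'', B ∩ {x}'>
        new_B = frozenset(B) & up({x}, M, I)
        return down(new_B, G, I), new_B
    else:        # <A ∩ {x}', (B U {x})''>
        new_A = frozenset(A) & down({x}, G, I)
        return new_A, up(new_A, M, I)

# The quadrangle sample from the Introduction:
G = {"square", "trapezoid"}
M = set("ABCDE")
I = {("square", a) for a in "ACDE"} | {("trapezoid", a) for a in "BE"}

top = (frozenset(G), up(G, M, I))      # the top candidate <G, G'>
print(cbo(*top, "square", G, M, I))    # stays at the top candidate
```

Starting from the bottom candidate $\langle M', M\rangle$ instead and applying `cbo` with the object `"square"` yields the candidate $\langle\{\text{square}\}, \{A, C, D, E\}\rangle$.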

The most important (monotonicity) property of the CbO operations is stated in the following Lemma.

Let $(G, M, I)$ be a sample, let $\langle A_1, B_1\rangle, \langle A_2, B_2\rangle \in L(G, M, I)$ with $A_1 \subseteq A_2$, and let $x \in G \cup M$. Then

$$CbO(\langle A_1, B_1\rangle, x) \leq CbO(\langle A_2, B_2\rangle, x).$$

#### 2. Problems with FCA

Unfortunately, the author and his colleagues have discovered and investigated some theoretical shortcomings of FCA based approach to Machine Learning:

1. The number of hypotheses can be exponentially large with respect to the size of the input data (training sample) in the worst case.

2. The problem of detecting large hypotheses is computationally hard (NP-hard).

3. Overtraining is unavoidable and appears in practice.

4. There are 'phantom' similarities between training examples, where each such parent has an alternative hypothesis on the cause of the target property.

To demonstrate drawback 1, we use the Boolean algebra case, corresponding to the sample whose positive examples are the $n$ coatoms:

|          | $m_1$ | $m_2$ | $\ldots$ | $m_n$ |
|----------|-------|-------|----------|-------|
| $g_1$    | 0     | 1     | $\ldots$ | 1     |
| $g_2$    | 1     | 0     | $\ldots$ | 1     |
| $\vdots$ |       |       |          |       |
| $g_n$    | 1     | 1     | $\ldots$ | 0     |

Then it is easy to check that any pair $\langle A, A'\rangle$ with $A \subseteq \{g_1, \ldots, g_n\}$ is a candidate. Hence there are $2^n$ candidates.

To evaluate the exponential growth of the output with respect to the input, estimate the memory needed to store the sample for $n = 32$ as $32 \times 32$ bits $= 128$ bytes, and the memory for the $2^{32}$ candidates as $2^{32} \cdot 32$ bits, i.e. 16 Gigabytes!
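This drawback can be reproduced numerically. The sketch below (illustrative code, not the system's) builds the $n$-coatom context, in which object $g_i$ lacks exactly the attribute $m_i$, and counts the closed object subsets:

```python
from itertools import combinations

def count_candidates(n):
    """Number of candidates of the n-coatom context (g_i I m_j iff i != j)."""
    G = range(n)
    M = range(n)

    def up(A):       # A': attributes possessed by every object of A
        return frozenset(m for m in M if all(g != m for g in A))

    def down(B):     # B': objects possessing every attribute of B
        return frozenset(g for g in G if all(g != m for m in B))

    count = 0
    for k in range(n + 1):
        for A in combinations(G, k):
            if down(up(frozenset(A))) == frozenset(A):  # A'' == A, i.e. closed
                count += 1
    return count

for n in range(1, 6):
    print(n, count_candidates(n))   # prints n, 2**n on each line
```

Already for $n = 32$ enumerating all $2^{32}$ candidates this way is hopeless, which is exactly the memory estimate from the text.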

Drawback 2 was discovered by Prof. Sergei O. Kuznetsov (HSE Moscow).

Shortcomings 3 and 4 were discovered by the author during his Dr.Sci. investigations. He introduced several probabilistic models for generating 'phantom' similarities together with the corresponding counter-examples to reject them. The clearest result is an asymptotic theorem asserting that the probability of generating a 'phantom' similarity between two parents without counter-examples tends to a positive limit when the probability of appearance of each attribute (considered as an i.i.d. Bernoulli variable) is $p$, the number of counter-examples tends to infinity, and the number of attributes tends to infinity as well.

Note that an even smaller quantity is already positive, since it coincides with the probability that a Poisson variable with the corresponding mean takes a value greater than 1.

Consult the author's Dr.Sci. thesis for more details and results.

#### 3. Randomized Algorithms

The key idea of the VKF-method is to randomly generate a small subset of the lattice of candidates and to use its elements as hypothetical causes of the target property. This trick avoids the exponentially high computational complexity of the usual algorithms of FCA (and of the JSM-method too).

So we need algorithms resembling random walks on a huge lattice that generate a candidate only when we need it.

The author invented and investigated the mathematical properties of several such algorithms (non-monotonic, monotonic, coupled, lazy coupled, and stopped coupled Markov chains). Details may be found in the author's Dr.Sci. thesis.

Now we present the coupled Markov chain algorithm that is the core of the probabilistic approach to machine learning based on FCA (the VKF-method).

```
input: sample (G, M, I), external function CbO(·, ·)
result: random candidate <A, B>

X = G ∪ M;
A = M'; B = M;
C = G;  D = G';
while (A != C || B != D) {
    select random element x from X;
    <A, B> = CbO(<A, B>, x);
    <C, D> = CbO(<C, D>, x);
}
```
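A direct Python transcription of this coupled chain might look as follows (the dict-based context representation and all names are my own assumptions; the two CbO cases are the operations defined in Section 1):

```python
import random

def coupled_cbo(G, M, I, rng=random):
    """Run the coupled Markov chain until the lower candidate <A, B> and
    the upper candidate <C, D> coalesce; return the random candidate.
    The context I maps each object of G to its set of attributes."""
    def up(A):                       # A'
        return frozenset(m for m in M if all(m in I[g] for g in A))

    def down(B):                     # B'
        return frozenset(g for g in G if B <= I[g])

    A, B = down(frozenset(M)), frozenset(M)   # bottom candidate <M', M>
    C, D = frozenset(G), up(G)                # top candidate <G, G'>
    X = sorted(set(G) | set(M))
    while (A, B) != (C, D):
        x = rng.choice(X)
        if x in I:   # x is an object: <(A U {x})'', B ∩ {x}'>
            B = B & I[x];  A = down(B)
            D = D & I[x];  C = down(D)
        else:        # x is an attribute: <A ∩ {x}', (B U {x})''>
            A = frozenset(g for g in A if x in I[g]);  B = up(A)
            C = frozenset(g for g in C if x in I[g]);  D = up(C)
    return A, B

# The quadrangle sample again (positive part):
I = {"square": frozenset("ACDE"), "trapezoid": frozenset("BE")}
G, M = set(I), set("ABCDE")
print(coupled_cbo(G, M, I, random.Random(7)))
```

Whatever random sequence is drawn, the two coupled copies meet at some element of $L(G, M, I)$, so the returned pair is always a genuine candidate.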

There exists a lazy variant of the coupled Markov chain. The author proved that lazy computations lead to an acceleration (with respect to the classical scheme above) by a factor depending on the total number of attributes and the number of training examples.

This result matches well with the experimental estimates obtained by the former RSUH student Lyudmila A. Yakimova.

#### 4. General Structure of VKF-method

In supervised Machine Learning there are two sets of objects called the training and test samples, respectively.

From the positive examples of the training sample the program generates a sample $(G, M, I)$. The negative examples form the set $O$ of counter-examples (obstacles to becoming a VKF-hypothesis).

The set $T$ of tests contains all the test objects whose target class is to be predicted.

The program invokes the lazy coupled Markov chain algorithm to generate a random candidate $\langle A, B\rangle$. The program saves it as a VKF-hypothesis if there is no obstacle $o \in O$ such that $B \subseteq \{o\}'$.

The main Inductive Generalization Algorithm is the following:

```
input: number N of VKF-hypotheses to generate
result: random sample S of the requested VKF-hypotheses

i = 0; S = ∅;
while (i < N) {
    generate random candidate <A, B> for (G, M, I);
    hasObstacle = false;
    for (o in O) {
        if (B is a part of {o}') hasObstacle = true;
    }
    if (hasObstacle == false) {
        S = S ∪ {<A, B>};
        i = i + 1;
    }
}
```
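The loop above can be sketched in Python as follows (the candidate generator is passed in as a callable; in the real system it would be the lazy coupled Markov chain, here it is stubbed by a random choice from a fixed list of candidates):

```python
import random

def induce(generate, obstacles, N, rng=random):
    """Collect N VKF-hypotheses: random candidates <A, B> whose fragment B
    is not included in any counter-example's attribute set.
    Warning: loops forever if every candidate is denied by an obstacle."""
    S = []
    while len(S) < N:
        A, B = generate(rng)
        if not any(B <= o for o in obstacles):
            S.append((frozenset(A), frozenset(B)))
    return S

# Toy run on the quadrangle sample: two known candidates, two obstacles.
candidates = [
    ({"square", "trapezoid"}, {"E"}),
    ({"square"}, {"A", "C", "D", "E"}),
]
obstacles = [{"A", "C", "D"}, {"B", "D"}]   # diamond and deltoid

S = induce(lambda rng: rng.choice(candidates), obstacles, 2, random.Random(1))
print(len(S))   # 2: neither fragment {E} nor {A,C,D,E} fits an obstacle
```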

The condition $B \subseteq \{o\}'$ means the inclusion of the fragment $B$ of the candidate $\langle A, B\rangle$ into the fragment (attribute subset) of the counter-example $o$.

If a candidate avoids all such obstacles, it is added to the resulting set of generated VKF-hypotheses.

We replace a time-consuming deterministic algorithm for generating all the candidates (for instance, the well-known Close-by-One algorithm) by a probabilistic one that randomly generates the prescribed number of VKF-hypotheses.

After that, the Machine Learning system predicts the target class of the tests and compares the results of the prediction with the original target values. This is the Prediction Algorithm:

```
input: list T of test examples to predict the target property
input: random sample S of candidates without counter-examples

for (x in T) {
    target(x) = false;
    for (<A, B> in S) {
        if (B is a part of {x}') target(x) = true;
    }
}
```
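The prediction step admits a similarly short sketch (illustrative names again; test objects are given by their attribute sets):

```python
def predict(tests, S):
    """A test object is predicted positive iff the fragment of some saved
    VKF-hypothesis is included in the object's attribute set."""
    return {name: any(B <= attrs for (_, B) in S)
            for name, attrs in tests.items()}

# The rectangle from the Introduction against the single positive hypothesis:
S = [(frozenset({"square", "trapezoid"}), frozenset({"E"}))]
tests = {"rectangle": {"A", "C", "E"}}
print(predict(tests, S))   # {'rectangle': True}
```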

The worst situation occurs when some important positive test example is missed by all the generated VKF-hypotheses and thus obtains the negative sign.

A test object $x$ is called $\varepsilon$-important if the total probability of the VKF-hypotheses $\langle A, B\rangle$ with $B \subseteq \{x\}'$ exceeds $\varepsilon$.

The author proved a theorem estimating the parameter $N$ of the Inductive Generalization Algorithm needed to avoid the worst case.

For any $\varepsilon > 0$ and any $\delta > 0$, a random sample $S$ of VKF-hypotheses of sufficiently large cardinality (bounded explicitly in terms of $\varepsilon$ and $\delta$) has, with probability $> 1 - \delta$, the property that every $\varepsilon$-important object $x$ contains a fragment of some VKF-hypothesis $\langle A, B\rangle \in S$, i.e. $B \subseteq \{x\}'$.

This theorem is an analogue of the famous results of Prof. Vladimir N. Vapnik and Prof. Alexey Ya. Chervonenkis from Computational Learning Theory.

#### Conclusion

The article describes the main mathematical aspects of a Machine Learning system based on Lattice Theory. The author calls it the 'VKF-system' in honour of his teacher, Prof. Victor K. Finn.

The last article of the series will be devoted to representations of objects with attributes of different types for applications of the Learning Machine described here.

Discrete attributes again require some techniques from FCA. Continuous attributes call for logistic regression, entropy-based separation of their ranges into subintervals, and a representation corresponding to a convex envelope of the subintervals whose similarity is computed.

The author would like to thank his colleagues and students for their support and stimulus.
