
Anonymity and Authenticity


The following text consists of two logically connected parts. The first part constructively refutes the assumption that untraceability implies anonymity. The second enumerates specific practical tasks, in the form of various scenarios, in which digital signatures (DS) alone do not provide a correct solution. It is demonstrated that a complete solution can be obtained through a special combination of a DS and an interactive anonymous identification protocol.

Introduction

Let us first explain the reasons for writing this article.

First of all, we have noticed a growing number of publications, including those posted on this platform, that address various aspects of such a highly demanded security service as anonymity. Examples can be found here and here, as well as here.

There is nothing unusual about this. The modern information environment is built on electronic means of communication, data processing, and storage. This is what has made possible the mass dissemination of information and its widespread availability. At the same time, the Internet faces an increasing level of censorship and a growing number of attempts to identify those who disagree with the mainstream point of view. Obviously, such actions stimulate demand for various methods of counteraction, including anonymity. Here we appeal to the intuitive understanding of this term, although we give a precise definition below.

On the other hand, in the mentioned posts, in particular in this one, we can see that the Author claims to offer a comprehensive “theory”; the title of that article unambiguously implies as much. However, if we are talking about a classical theoretical approach, it is necessary to ensure the completeness and consistency of the provisions that make up its semantic content. Unfortunately, there are some omissions here, which we will try to demonstrate with a simple counterexample. We clarify that the Author of the text entitled “The Theory of the Structure of Hidden Systems” was notified of our intentions in advance. When referring to the author of that article, we will use the appellative “Author” with a capital letter, thereby emphasizing an exceptionally respectful, but at the same time constructively critical, attitude.

Terms and Definitions

Let's start by clarifying the terminology.

By “personal data” we mean a data structure whose target fields contain various information related to an individual. Passport data can serve as an analogy here, although an extended interpretation is also possible. It is fair to assume that similar information is included in a public key certificate, if one is issued and maintained.

For simplicity, let's consider a two-way interaction that involves a sender and a receiver of messages. In some reasoning, when it is not necessary to distinguish between sender and receiver, it is more convenient to use the term “subject”. Let the personal data of the subject be associated with some abstract entity, for example, an address, a sequence of numbers, letters and special characters, an image, a random sequence, and so on. Let us reduce the set of such entities to the general concept of identifier. Let us also assume that the mapping of personal data to an identifier is bijective and one-way. In other words, mapping to an identifier is performed with polynomial computational complexity, while the reverse mapping is performed with superpolynomial computational complexity. Without going into the subtleties of definitions, we will assume that the simplicity of computation of an identifier is compensated by the complexity of restoring personal data using this identifier.

The characteristics of the identifier are determined by the selected mapping. For example, if a cryptographic hash function is used for this purpose, then, based on the random oracle model, the resulting identifier will be uniformly distributed, while the mapping itself remains deterministic and computable in polynomial time.
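As an illustration of such a mapping (a sketch under our own assumptions, not a construction from the article), a deterministic hash-based identifier can be built from the standard library:

```python
import hashlib

def identifier(personal_data: bytes) -> str:
    """Map personal data to an identifier via a cryptographic hash.

    Computing the identifier is cheap (polynomial time), while recovering
    the personal data from it is infeasible (one-wayness of the hash).
    """
    return hashlib.sha256(personal_data).hexdigest()

# The mapping is deterministic: the same data always yields the same identifier.
id_a = identifier(b"name=Alice;passport=1234")
id_b = identifier(b"name=Alice;passport=1234")
assert id_a == id_b

# A one-character change in the input produces an unrelated-looking identifier.
id_c = identifier(b"name=Alice;passport=1235")
assert id_a != id_c
```

The example data and field layout are hypothetical; the point is only that the forward mapping is easy and the reverse mapping is not.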

We will use the term anonymity to refer to a security service under which, given a known identifier, a subject with limited computational and memory resources is unable to disclose in real time the personal data of another subject. The Author of the above-mentioned article uses the term “unlinkability” to denote anonymity. We have nothing against new terms, provided they do not duplicate existing ones and bring additional meaning. In this case, we were unable to discern the point of the proposed innovation.

Since the Author is mainly talking about untraceability, it is necessary to clarify this security service, which is largely related to network interaction. Untraceability is achieved when the recipient cannot determine the sender's network address; put differently, determining this address requires solving a problem of superpolynomial complexity. However, one should distinguish:

  • the information-theoretic approach, when the solution can either be guessed with a certain probability or, given a decision criterion, obtained by a brute-force attack;

  • the computational-complexity approach, which reduces to solving a well-known computational problem.

The difference here is that there are a number of computational problems that are not proven to be NP-hard. For these problems, algorithms of subexponential complexity exist on a non-quantum computer, while algorithms of polynomial complexity remain unknown. Note that, from the point of view of vulnerability, for certain parameter sets subexponential complexity is comparable to polynomial complexity, and eliminating this flaw requires parameters that negatively affect performance: increased latency, additional memory allocation, and other expenses.

The Author introduces a new term “non-observability” to designate untraceability. However, this new term also does not seem to add anything new to the meaning of the already existing term. Disputes about the content of a particular term can be endless. But we believe that here one should be guided by a simple rule of "first hand" – the term “untraceability” with the corresponding semantic scope was introduced twenty years ago in the book of Chmora A.L. Modern applied cryptography. 2nd ed. Moscow: Helios, ARV, 2002. 256 p. And since then it has been used everywhere.

Counterexample

Leaving terminological disputes aside, the Author talks about untraceability. His main claims could be considered sound, were it not for one slight inconsistency. The Author claims that untraceability includes anonymity. Let us quote the Author:

“The non-observability criterion already includes the unlinkability criterion. If we go from the opposite and assume the falsity of this judgment (that is, the absence of unlinkability in non-observability), then it would be possible to determine the existence of information subjects with the help of unlinkability and, thereby, admit the possibility of non-observability violation. And this contradicts the latter.”

This statement is not clear enough, yet it is presented as a formal proof, which in fact it is not. Our task is to show why it is untrue.

It should be noted right away that our interpretation of anonymity as a security service corresponds to a regulatory model in which personal data is disclosed neither to the subjects of interaction nor to an outside observer. A simplified model, in which personal data is always known to the subjects but is not disclosed to an outside observer, we consider untenable, since it de-actualizes anonymity and replaces it with confidentiality.

Personal data differs from private keys in that it can be disclosed upon request, but only by the free will of its owner. Private keys, by contrast, are not subject to disclosure under any circumstances.

Looking at the issue more broadly, it makes sense to interpret anonymity in the context of such basic security services as confidentiality, authenticity, and integrity. For objective reasons, we restrict ourselves to methods of public-key cryptography. We should also note the important role of public key certificates: they allow one to confirm (or refute) the authenticity of public keys during encryption and DS verification. Note that a public key certificate includes, among other things, the personal data of the key's owner.

The transmission of a single encrypted message guarantees the sender's anonymity, but not the recipient's. Let us explain. Only the owner of the paired private key can decrypt the message, and the sender authenticates the public key from this pair, yet no one except the sender knows whose particular key he used for encryption. Authenticating the recipient's public key via the corresponding certificate reveals the recipient's personal data. Let us also assume that during such a transmission the sender deliberately does not disclose his own personal data. The recipient's lack of anonymity is explained by the fact that his personal data is known to the sender and can be disclosed (the sender has no interest in disclosing his own personal data, but this rule may not extend to someone else's). There is a simple principle: “If someone other than yourself knows your personal data, then it can be disclosed.” Untraceability, however, requires additional means, such as network coding. We see here that anonymity and untraceability exist on their own and do not correlate in any way.

When it is necessary to guarantee authenticity and integrity, the message is certified with a DS using the sender's private key. Let us call the person who signs the witness. Now the recipient must verify the authenticity of the public key of the witness (sender) and, once it is confirmed, proceed to verify the DS of the message. To do so, the recipient retrieves the public key certificate of the witness and checks the DS that is part of the certificate and was generated by a trusted certification authority. The validity of this DS indicates that the public key from the certificate is authentic. As noted above, the certificate also contains the personal data of the witness. This in turn means that anonymity is impossible; it does not matter at all whether untraceability is ensured or not. The message can first be certified with a digital signature using the witness's private key and then encrypted using the recipient's public key. Obviously, such a scheme also does not ensure any anonymity of the interacting subjects. This is the very counterexample that points to the inconsistency of the “proof” presented by the Author.

It is logical to conclude that anonymity is achievable by giving up public key certificates. Sometimes this is justified (find out more), but for the vast majority of conventional tasks such a radical rejection of public-key cryptography principles discredits the basic security services.

Public-key cryptography cannot simply be thrown overboard. For example, the Diffie-Hellman protocol is used to agree on a session secret key for further use in a symmetric encryption/decryption scheme. To counteract the man-in-the-middle (MItM) attack, this protocol assumes that session (short-term) public keys are certified with a DS, while DS verification is performed using certificates of the subjects' long-term public keys. A similar case can be observed with the “digital envelope” technique.
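The key agreement itself can be sketched with textbook parameters (a toy group with p = 23, for illustration only; real deployments use standardized groups or elliptic curves, and the ephemeral values below would be certified with a DS to block the MItM attack just described):

```python
import secrets

# Toy Diffie-Hellman over a tiny prime group -- illustration only.
p = 23   # textbook-sized prime modulus (never use in practice)
g = 5    # generator of the group

a = secrets.randbelow(p - 2) + 1   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # Bob's ephemeral secret

A = pow(g, a, p)   # Alice -> Bob (this value would be signed with a DS)
B = pow(g, b, p)   # Bob -> Alice (this value would be signed with a DS)

k_alice = pow(B, a, p)   # shared secret computed on Alice's side
k_bob = pow(A, b, p)     # shared secret computed on Bob's side
assert k_alice == k_bob  # both sides arrive at the same session secret
```

Without the DS on A and B, an attacker in the middle can substitute his own ephemeral values, which is exactly the weakness the certificates are meant to close.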

Formulation of the problem

If we cannot completely give up public-key cryptography or public key certificates, then what can we do? Are there other solutions?

We tried to answer these questions by developing an interactive anonymous identification protocol (learn more). Since that publication, we have continued our research and obtained new results. In time we will describe these findings in detail; for now we limit ourselves to stating the task and explaining the reasoning the developers followed.

The creation of the anonymous identification protocol began with the development of its MItM-immunized version. From the start it was clear that there was another, no less urgent task: moving from identification to the organization of a secure tunnel for data transmission. Such a virtual tunnel can be implemented using a secret key agreement protocol, such as the well-known Diffie-Hellman protocol. The MItM-immunized identification protocol is certainly self-sufficient and in demand for access control tasks. However, using it in combination with the Diffie-Hellman protocol nullifies the MItM immunization: the confirmation or denial of affiliation to a local community of participants (and this is the meaning of identification) cannot be factored into a key agreement protocol that exists separately from the identification protocol. In other words, an attacker can “wedge in” between separate sessions of the two protocols, identification and key agreement, and carry out an impersonation that is subsequently impossible to detect.

In addition, any identification protocol consists of at least one and a half rounds; it has been proven that fewer is impossible. The Diffie-Hellman protocol consists of one round. In total, we have two and a half rounds. Here, by a round we mean a two-way exchange of single messages; a round and a half is a two-way exchange of single messages with an additional receipt.

As part of our research, we were tasked to combine the identification and key agreement in a single protocol so that its final communication complexity (the number of messages exchanged by the parties) did not exceed one and a half rounds.

The amount of data transferred also matters. To reduce it, the developers turned to cryptographic methods based on elliptic curve point arithmetic. These methods guarantee an adequate level of cryptographic strength together with effective data compression. This explains the use of such a cryptographic primitive as the pairing of elliptic curve points, while the points themselves are presented in compressed form: only one coordinate is given, and the second can be efficiently computed.
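Point compression can be illustrated as follows (the article does not specify a curve; we take secp256k1 only because its parameters are widely known). A compressed point stores the x-coordinate plus one parity bit; y is recovered from the curve equation y² = x³ + 7 (mod p):

```python
# Field prime of secp256k1; p % 4 == 3, so a modular square root
# can be computed with a single exponentiation.
p = 2**256 - 2**32 - 977

def decompress(x: int, y_is_odd: bool) -> int:
    """Recover y from x and a parity bit on the curve y^2 = x^3 + 7 (mod p)."""
    y2 = (pow(x, 3, p) + 7) % p      # right-hand side of the curve equation
    y = pow(y2, (p + 1) // 4, p)     # modular square root, valid since p % 4 == 3
    if (y % 2 == 1) != y_is_odd:     # pick the root with the requested parity
        y = p - y
    assert pow(y, 2, p) == y2        # x must actually lie on the curve
    return y

# The secp256k1 generator point: recover Gy from Gx and its parity bit.
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert decompress(Gx, Gy % 2 == 1) == Gy
```

So transmitting 256 bits plus one bit suffices in place of two 256-bit coordinates, which is the compression the text refers to.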

Let us ask ourselves the following question: Are there any practical problems when anonymity and authenticity are required at the same time? Yes, such situations exist. We offer several scenarios below.

To simplify the presentation, we omit the details that are not essential for understanding, related to the registration, generation and distribution of keys.

Scenario 1: Remote healthcare

Let there be a remote medical service that includes a set of different services: preliminary consultation, diagnostics, therapeutic recommendations, prescriptions, etc. A potential patient (patient X) remotely contacts the medical center. The first thing he must do is prove that he is a client, a “friend” rather than a foe. In other words, he has to demonstrate that he holds insurance covering the requested services and satisfies other mandatory requirements. At this point the identification stage is activated; it is aimed at distinguishing friends from foes. If the insurance exists and has not expired, X will be able to prove that he is “a friend” and is entitled to medical care. At this stage anonymity is preserved: the patient's personal data is not disclosed. If X's insurance is confirmed, he can request medical care. The medical center decides whether to provide it (for example, insurance may exist but not cover the requested services).

Assuming the service is granted, the medical center and X can establish a secure tunnel for data transmission. This is done by jointly generating a shared session secret key. To do this, both parties use the secret material they formed at the identification stage. Once the tunnel is established, X must transfer his personal data to the medical center in order to access his medical record.
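A minimal sketch of that derivation step (our own assumption about the construction, not the authors'): both sides feed the secret material from the identification stage into an HKDF-style extract-then-expand key derivation (RFC 5869) built from standard-library HMAC:

```python
import hashlib
import hmac

def derive_session_key(shared_secret: bytes, context: bytes) -> bytes:
    """HKDF-style extract-then-expand with SHA-256 (illustrative sketch)."""
    # Extract: condense the input secret into a pseudorandom key.
    prk = hmac.new(b"\x00" * 32, shared_secret, hashlib.sha256).digest()
    # Expand: bind the key to a context label so different uses get different keys.
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Both ends hold the same secret material from the identification stage
# (placeholder value for illustration) and so derive the same tunnel key.
secret_material = b"secret material formed at the identification stage"
k_patient = derive_session_key(secret_material, b"medical-center tunnel v1")
k_center = derive_session_key(secret_material, b"medical-center tunnel v1")
assert k_patient == k_center
```

The context label is hypothetical; its role is simply to ensure that keys derived for different purposes from the same material do not coincide.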

And here comes the problem. 

Patient X can transfer not his own data but someone else's. For example, he may want another person to receive medical care under his insurance. DS does not help here either: it is easy to obtain a DS over the personal data of third parties in various ways, for example, through collusion or social engineering (phishing, etc.). Therefore, the center, acting as a verifier, must be able to reliably check that the signer is the one who, in the current session, proved that he is “a friend”, i.e. a client of the medical center.

This is where the instant mode of DS that we developed comes in handy. It is worth emphasizing that this is precisely a mode, since no restrictions are imposed on the DS scheme itself. What makes the mode unique is that it works only with our identification protocol and does not work with other protocols.

Finally, the personal data certified by the DS is compared with the data from the public key certificate used to verify the DS. If the data matches, then it reliably belongs to the one who, in the current session, proved that he is “a friend”.

Scenario 2: Competitive bidding

Let us imagine an individual or legal entity requesting a bank loan. The bank verifies the submitted documents, analyzes the credit history and current balance, and ultimately decides whether the applicant qualifies for a loan on the agreed terms. If the bank says “yes”, it enters information about the borrower into a special permissive ledger and then generates personal public and private keys. A group public key, generated from borrower data, is also associated with the ledger. We wrote about the group public key in detail here.

The peculiarity is that the personal public key certificate contains information certifying that the owner has been approved for a loan, indicating its size, interest rate, repayment terms, creditor bank details, and so on; however, instead of personal data it uses an internal bank identifier. Hereinafter we will call such certificates secondary. This needs clarification. Obviously, a certificate authority will refuse to issue a certificate if it cannot verify the identity of the person who requested it. However, such a certificate may be issued in the name of an individual, such as an authorized bank officer responsible for managing loans. The officer requests the certificate based on an entry in the permissive ledger.

Let us suppose that an approved bank loan is one of the prerequisites for participation in competitive bidding. The applicant can use the interactive protocol to prove that he has the right to take part in the competition. The opposite party (the tender commission) uses the group public key to verify the proof; only a borrower who holds the private key can actually produce it. If the proof is accepted by the tender commission, the applicant becomes a bidder.

Variations are possible. For example, if the borrower was first included in the ledger and then excluded from it, his data will not be taken into account when generating the group public key, and the secondary certificate will be revoked. This means that the proof will be rejected despite the existence of a private key.

The participant's anonymity is guaranteed, since the secondary certificate contains no personal data of the borrower. However, after the proof is accepted by the tender committee, the participant can use a secure tunnel to transfer his personal data certified by an individual DS (IDS), which is signed/verified using separate keys in no way related to the borrower's keys. The fundamental difference concerns the certificates: the secondary certificate is issued for the personal public key, whereas the certificate of any other public key, including the one used to verify an individual digital signature, includes the personal data of its owner.

We emphasize that only the competition commission has access to the IDS. Using the IDS in instant mode makes it possible to rule out the use of third parties' personal data. Since all the necessary information about the participant is specified in the target fields of the public key certificate used for IDS verification, the competition commission can check the received personal data against the information from this certificate. Obviously, without such verification an attacker could certify other people's personal data with his own DS. A confirmed IDS together with successful verification indicates that the received personal data is authentic.

Information about participants is not disclosed to third parties. This minimizes risks of corruption. For example, it eliminates possible pressure on participants to change the results of competitive bidding. We emphasize that proof verification does not involve interaction with the creditor bank.

If, for example, the applicant first certifies his personal data with an IDS and then encrypts it using the public key from the certificate issued to the tender committee, the information is likewise not disclosed to a third party, but in this case there is no proof. Having decrypted the data and confirmed the validity of the IDS, the tender commission is forced to contact the bank to verify the applicant's creditworthiness, which leads to additional expenses for both the bank and the tender commission.

Scenario 3: Digital notary

The demand for digital notary services stems from the mass adoption of electronic document management, both in public administration and in various business practices. In what follows, the party that requests such services (a natural or legal person) will be called the client, and the party that provides them will be called the notary. When interacting remotely, it is necessary to guarantee the client's anonymity at the initial stage, as well as to ensure the confidentiality, integrity, and authenticity of data transmitted over insecure communication channels.

The client registers and receives a private key from the trusted party; in addition, a secondary certificate of the personal public key is issued. Such a certificate does not contain the client's personal data but includes general information: for example, citizenship, age, residence, and other information that, on the one hand, does not provide grounds for the client's deanonymization but, on the other hand, confirms that the client is entitled to receive these services. For an individual, such a certificate is issued in parallel with the passport and can be issued, for example, in the name of an authorized officer of the Federal Migration Service. A group public key and the certificate associated with it are generated based on information about the clients.

The notary has his own pair of keys and a certificate is generated for the public key from this pair.

Before receiving the services, the client must prove that he is entitled to them. For this purpose, the identification protocol is initiated, and if the proof presented by the client is accepted by the notary, the parties jointly derive a common session secret key, each using the secret data generated at the identification stage. Further interaction is carried out through a secure data transfer tunnel.

Let us assume that the service is to certify some electronic document, such as a power of attorney, with the notary's DS. For this purpose, the client submits to the notary his personal data certified by an individual DS. The instant mode rules out the use of personal data of third parties. The IDS is signed/verified using a unique pair of keys; for the public key from this pair, a certificate is issued with the owner's personal data indicated in the target fields. A valid IDS enables verification of the personal data against the information from the target fields of the corresponding certificate.

If the personal data is confirmed by verification, the notary notifies the client that he is ready to provide the services. We emphasize that the notary is not limited in any way and may resort to any legal means of verifying personal data. The client transfers the electronic document to the notary, who certifies it with his DS and returns it to the client along with the signature.

When a service is requested, the notary checks the proof using the group and personal public keys from the corresponding certificates, whose content does not allow deanonymizing the client. Personal data and the electronic document are transmitted in encrypted form and can be decrypted only by the notary. Therefore, anonymity is guaranteed, as are the confidentiality, integrity, and authenticity of the data of a particular session.

In all the scenarios discussed above, the interactive anonymous identification protocol plays a key role, since a successful interaction means that the prover really knows the private key, whereas with a DS various outcomes are possible. As we emphasized earlier, a DS can be obtained in various ways, including ways that do not require knowledge of the private key. Indeed, as a result of planned actions, it is not difficult to induce the owner of personal data to sign with a private key known only to him. The danger is that such a routine and completely legal operation usually does not arouse the witness's suspicion. In addition, DS-certified personal data is often distributed over insecure communication channels and stored in public memory, which further exacerbates the risk of its use for impersonation.

The fundamental difference is that the interactive protocol performs identification at a specific point in time, in accordance with the “here and now” principle (online), while signing is carried out in advance (offline). As a rule, signing is separated in time from verification, and for this reason it is sometimes impossible to trace its origin.

Conclusion

This article contains polemical reflections inspired by a number of publications dedicated to anonymity and untraceability. We have also attempted to examine anonymity and authenticity as security services that, on the one hand, fundamentally contradict each other but, on the other, are both in demand in practical applications.
