It seems that the problem of calculating the absolute value of a number is completely trivial. If the number is negative, change the sign. Otherwise, just leave it as it is. In Java, it may look something like this:

```
public static double abs(double value) {
    if (value < 0) {
        return -value;
    }
    return value;
}
```

It seems to be too easy even for a junior interview question. Are there any pitfalls here?

Let's remember that in the IEEE-754 standard, and in particular in Java, there are two zeros: +0.0 and -0.0. Like twin brothers, they are easy to mix up, but they are in fact different. The difference shows not only in the textual representation but also in some operations. For example, dividing 1.0 by +0.0 and by -0.0 yields completely different answers: +Infinity and -Infinity. In comparison operations, however, +0.0 and -0.0 are indistinguishable, so the implementation above does not remove the minus sign from a -0.0 input. This can lead to unexpected results. For example:

```
double x = -0.0;
if (1 / abs(x) < 0) {
    System.out.println("oops");
}
```

At first glance, the reciprocal of the modulus of `x` cannot be negative, whatever the value of `x` is. But in this case, it can. If you are in a sadistic mood, ask your candidate to write the `abs` method during an interview. When they produce a method like the one at the beginning of the article, ask them whether there is a value of `x` for which the condition `1 / abs(x) < 0` holds. After such interviews, there will be rumors about your company.

Okay, we identified the problem. Now, how do we fix it? Naively adding `if (value < 0 || value == -0.0)` will fail, because `+0.0 == -0.0` is true. As a result, we would make things even worse: now `-0.0` would be returned for a positive zero input as well. To reliably distinguish negative from positive zero, there is the `Double.compare` method:

```
public static double abs(double value) {
    if (value < 0 || Double.compare(value, -0.0) == 0) {
        return -value;
    }
    return value;
}
```

This works. However, our method has become very slow for such a trivial operation. The `Double.compare` implementation is not so simple: it requires two additional comparisons for a positive number, three for `-0.0`, and as many as four for `+0.0`. If we take a look at the `Double.compare` source code, we can see that we only need the `doubleToLongBits` part. This method reinterprets the binary representation of a `double` number as a `long` number (both are 8 bytes long). When you compare integral numbers, there are no surprises. So we can simplify the code this way:

```
private static final long MINUS_ZERO_LONG_BITS =
        Double.doubleToLongBits(-0.0);

public static double abs(double value) {
    if (value < 0 ||
            Double.doubleToLongBits(value) == MINUS_ZERO_LONG_BITS) {
        return -value;
    }
    return value;
}
```

However, it turns out that `doubleToLongBits` is not entirely trivial either, because it canonicalizes NaNs. There are many ways to encode a not-a-number as a `double`, but only one of them is canonical. These different NaNs are even more alike than the twin zeros: they can be distinguished neither via `Double.compare` nor via any other operation. Even their string representation is the same. But they look different in computer memory. To avoid surprises, `doubleToLongBits` converts any `NaN` to the canonical form, which is encoded in a `long` as `0x7ff8000000000000L`. Of course, this procedure adds more conditions, which we do not need here.

What can we do? It turns out there is another method, `doubleToRawLongBits`. It does not do any smart conversion of `NaN` and simply returns the exact bit representation:

```
private static final long MINUS_ZERO_LONG_BITS =
        Double.doubleToRawLongBits(-0.0);

public static double abs(double value) {
    if (value < 0 ||
            Double.doubleToRawLongBits(value) == MINUS_ZERO_LONG_BITS) {
        return -value;
    }
    return value;
}
```

The JIT compiler can remove the `doubleToRawLongBits` method call completely, because it is simply a matter of reinterpreting the set of bits stored in a CPU register so that our Java data types agree. The bits themselves remain the same, and the CPU does not usually care about data types. Although there are rumors that this call may still cause a transfer from a floating-point register to a general-purpose register, it is still very fast.

Good, now we have only two conditional branches for all positive numbers and zeros. Still, it seems like a lot. We know that branches are bad: if the CPU branch predictor guesses incorrectly, they can be very expensive. Can we do with fewer? It turns out that we can turn both positive and negative zero into positive zero by subtracting from `0.0`:

```
System.out.println(0.0-(-0.0)); // 0.0
System.out.println(0.0-(+0.0)); // 0.0
```

So, we can rewrite the implementation in the following way:

```
public static double abs(double value) {
    if (value == 0) {
        return 0.0 - value;
    }
    if (value < 0) {
        return -value;
    }
    return value;
}
```

You may wonder why this is so complex. Why not just return a 0.0 constant in the first branch? And we have not reduced the number of comparisons: there are still two. However, we can now notice that for ordinary (non-zero) negative numbers, `0.0 - value` and `-value` produce identical results. Thanks to this, we can merge both branches into one:

```
public static double abs(double value) {
    if (value <= 0) {
        return 0.0 - value;
    }
    return value;
}
```

Great, now we have only one branch. Can we celebrate the victory? Or can we go down to zero branches? Is that possible?

If we look at the binary representation of a double-precision number in the IEEE-754 format, we can see that the sign is just the most significant bit. Accordingly, we only need to unconditionally clear this most significant bit; the rest of the number does not change in this operation. In this regard, fractional numbers are even simpler than integers, where negative values turn into positive ones through two's complement. We can clear the most significant bit with the `&` operation and the right mask. We also need to reinterpret a `double` number as a `long` (we already know how to do this), and afterwards reinterpret it back (this can be done via the `longBitsToDouble` method, which is also practically free):

```
public static double abs(double value) {
    return Double.longBitsToDouble(
            Double.doubleToRawLongBits(value) & 0x7fffffffffffffffL);
}
```
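A quick sanity check over the tricky inputs confirms the branch-free version behaves correctly (a sketch; `abs` here is the bit-masking implementation above):

```
public class AbsCheck {
    // The branch-free implementation from the article
    public static double abs(double value) {
        return Double.longBitsToDouble(
                Double.doubleToRawLongBits(value) & 0x7fffffffffffffffL);
    }

    public static void main(String[] args) {
        System.out.println(1 / abs(-0.0) > 0);             // true: -0.0 becomes +0.0
        System.out.println(abs(-1.5) == 1.5);              // true
        System.out.println(abs(Double.NEGATIVE_INFINITY)); // Infinity
        System.out.println(Double.isNaN(abs(Double.NaN))); // true: NaN stays NaN
    }
}
```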

This implementation does not contain branches at all, and profiling shows that under certain conditions the throughput of the method increases by 10%. The previous single-branch implementation had been in the Java standard library for ages, but in the upcoming Java 18, the improved version is already committed.

In many cases, however, these improvements do not mean anything, because the JIT compiler can use the appropriate assembly instruction, if available, and completely ignore the Java code. For example, on the ARM platform the VABS instruction is used. So it is unlikely that this change will make your programs considerably faster. But thanks to it, I could write a (hopefully!) interesting article.