Optimized Exponential Functions for Java

Usually microoptimization is only done in C or C++, but it works quite well in Java too. For a project I needed very fast log() and exp() calculations, and Java’s Math.log() and Math.exp() just don’t cut it. After a bit of research I found the following approximations, which are good enough for me:

UPDATE This pow() approximation is obsolete. I have a much faster and more accurate version here.

Fast Exponential Function in Java

The paper “A Fast, Compact Approximation of the Exponential Function” describes a C macro that does a good job at exploiting the IEEE 754 floating-point representation to calculate e^x. I have transformed the macro into Java code:
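The Java version is short (the same code is quoted in the comments below); the constants are 2^20/ln(2) ≈ 1512775 and 1023·2^20 = 1072693248, with 60801 as the error-minimizing correction term from the paper. The class name is just for illustration:

```java
public class FastMath {
    // Schraudolph’s approximation: compute e^x by writing
    // (x * 2^20/ln(2) + bias - correction) into the upper 32 bits
    // (sign, exponent, high mantissa) of an IEEE 754 double.
    public static double exp(double val) {
        final long tmp = (long) (1512775 * val + (1072693248 - 60801));
        return Double.longBitsToDouble(tmp << 32);
    }
}
```

Because the lower 32 mantissa bits are left zero, the result is only accurate to a few percent, but it stays within that error band across the valid input range.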

This code is 5.3 times faster than Math.exp() on my computer. Beware that it is only an approximation; for a detailed analysis, read the paper.

Fast Natural Logarithm in Java

I have found the following approximation here, and there is not much information about it except that it is called “Borchardt’s Algorithm” and that it comes from the book “Dead Reckoning: Calculating Without Instruments”. The approximation is not very good (some might say very bad…), and it gets worse as the values get larger. But it is a monotonic, slowly increasing function, which is good enough for my use case.
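In Java, Borchardt’s approximation is a one-liner; the formula matches the one listed in the benchmarks in the comments below (the class name is made up for illustration):

```java
public class FastLog {
    // Borchardt’s approximation: log(x) ≈ 6*(x-1) / (x + 1 + 4*sqrt(x)).
    // Exact at x = 1, monotonic, with error growing as x moves away from 1.
    public static double log(double x) {
        return 6 * (x - 1) / (x + 1 + 4 * Math.sqrt(x));
    }
}
```

Near x = 1 the result is almost exact; at x = 10 it is already off by roughly one percent, and the error keeps growing from there.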

This approximation is 11.7 times faster than Math.log().

Fast Power Calculation

Equipped with these optimized functions, it is possible to do several other optimizations. For example, you can replace

Math.pow(a, b)

with

Math.exp(Math.log(a) * b)

and then use the approximation functions for a highly optimized pow() calculation. You can even combine the calculations and simplify it into this:
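The exact simplified form is not preserved in this copy of the post, but folding Borchardt’s log() directly into the IEEE 754 exp() trick gives a sketch of the combined calculation (class and method names assumed):

```java
public class FastPow {
    // Borchardt’s log approximation
    private static double log(double x) {
        return 6 * (x - 1) / (x + 1 + 4 * Math.sqrt(x));
    }

    // Schraudolph’s IEEE 754 exp approximation
    private static double exp(double val) {
        final long tmp = (long) (1512775 * val + (1072693248 - 60801));
        return Double.longBitsToDouble(tmp << 32);
    }

    // pow(a, b) = e^(b * log(a)), with both steps approximated
    public static double pow(double a, double b) {
        return exp(b * log(a));
    }
}
```

The errors of the two approximations compound, so this is even less accurate than either building block alone.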

This is 8.7 times faster than Math.pow(a, b).

Accuracy

The above functions are very inaccurate, especially the log() approximation. So before you use this code, you have to test whether the approximation is good enough for you!
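A minimal sketch of such a test, sweeping the exp() approximation over a range of inputs and reporting the worst relative error (class and method names are made up for illustration):

```java
public class AccuracyCheck {
    // Schraudolph-style exp approximation, as described above
    public static double fastExp(double x) {
        final long tmp = (long) (1512775 * x + (1072693248 - 60801));
        return Double.longBitsToDouble(tmp << 32);
    }

    public static void main(String[] args) {
        double maxRelErr = 0;
        // Sweep a range that covers your application's actual inputs.
        for (double x = -5; x <= 5; x += 0.01) {
            double exact = Math.exp(x);
            double rel = Math.abs(fastExp(x) - exact) / exact;
            if (rel > maxRelErr) maxRelErr = rel;
        }
        System.out.printf("max relative error on [-5, 5]: %.4f%n", maxRelErr);
    }
}
```

The important part is to sweep the range your own code will actually feed in, since the error is not uniform.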

Have fun

19 Comments on "Optimized Exponential Functions for Java"

Martin Ankerl
Guest

Here are some benchmarks from a Pentium IV, doing 20 million calculations. On this machine I get even better performance than stated above. I use Sun’s JRE 1.5.0_08.

6.233 sec, Math.log(val)
0.531 sec, 6*(x-1)/ (x + 1 + 4*(Math.sqrt(x)))

5.920 sec, Math.exp()
1.108 sec, exp optimized with IEEE 754 trick

15.967 sec, Math.pow(a, b)
11.014 sec, e^(b * log(a))
7.607 sec, e^(b * log(a)) + IEEE 754 trick
2.109 sec, e^(b * log(a)) + IEEE 754 trick + LOG approximation
1.827 sec, simplified everything

trackback

[…] I have updated the code for the Math.pow() approximation; now it is 11 times faster on my Pentium IV. Read it here. […]

DoctorEternal
Guest

Thanks for this. Always good to get more performance tweaking out of Java. I use Processing for game dev in Java, and even with its use of Jikes, it could still use a boost.

Dr.E
http://www.turingshop.com/reports/01Java/

trackback

[…] I have already written about approximations of e^x, log(x) and pow(a, b) in my post Optimized Exponential Functions for Java. Now I have more . In particular, the pow() function is now even faster, simpler, and more accurate. Without further ado, I proudly give you the brand new approximation: […]

John
Guest
I have used the same trick for float, not double, with some slight modification to the constants to suit the IEEE 754 float format. The first constant for float is 1<<23/log(2) and the second is 127<<23 (for double they are 1<<20/log(2) and 1023<<20). You don’t need to do the addition as floating point, I have moved the braces around…

public static double exp(double val) {
    final long tmp = (long) (1512775 * val) + (1072693248 - 60801);
    return Double.longBitsToDouble(tmp << 32);
}

…
Michel Hummel
Guest

@John
Hello,
I’m very interested in the accuracy optimization you suggested, but I don’t understand what the variable “a” is in the expression:

error = (error - a * a) / 186;

Could you explain it please

Thanks
Michel Hummel

Nosredna
Guest

I think a is just the mantissa. I’m guessing John started changing the variable name and forgot what he was doing.

Axeia
Guest

The exp function also seems to work quite well for J2ME, which lacks Math.exp().

So if you’re writing say a game with projectiles traveling a certain trajectory it’s good enough 🙂

Jarek
Guest

I confirm that those work with J2ME. Thanks to your math approximations I’ve been able to run Speex voice decoding on a mobile with a decent performance 🙂

Martin Ankerl
Guest

glad that it’s working 🙂

Dick Rochester
Guest
I have inherited some code from a former employee. He has some code he calls double powa(double x, double a), which he says computes x^a for a close to 1.0. The code is:

double powa(double x, double a) {
    double lnx = log(x);
    double am1 = a - 1;
    double product = lnx * am1;
    return (product + x * product^2 + x);
}

I at first thought he was using three terms of a Taylor series about 1.0. However, it doesn’t seem to be that. Where did this come from? BTW, I tested it and it does work, i.e.…
Brad Knox
Guest

This post was really helpful. Thanks!

One thing to point out is that the exp() approximation is only valid for inputs roughly within the bounds of -700 to 700 (Schraudolph, 99).

dharmendra prasad
Guest

Hi, I tested the method for calculating x^n and it is not accurate even for powers of order 10. 🙁 Disappointed.

Joe Bowbeer
Guest

Schraudolph’s exponential function article is available at http://nic.schraudolph.org/pubs/Schraudolph99.pdf

MartinAnkerl
Guest

Thanks, I have updated the link

Nic Schraudolph
Guest
Hi, I just happened across this… thanks for providing a Java version! I want to share some enhancements which I haven’t gotten around to publishing yet:

You can get a much better approximation (piecewise rational instead of linear) at the cost of a single floating-point division by using better_exp(x) = exp(x/2)/exp(-x/2), where exp() is my published approximation but you don’t need the additive constant anymore, you can use c=0. On machines with hardware division this is very attractive.

Also, you can use my approximation in reverse (write a float, read it back as an int) to get a fast logarithm…
Nic Schraudolph
Guest

PS: Just noticed that you also had the idea to use the inverse of my exp() approximation to get a fast logarithm, then combine them for the power function. If you do the same with the improved better_exp() version and its inverse, you get a power function that is still very fast but a lot more accurate. Best regards, nic

Martin Ankerl
Guest

Hi Nic, thanks for your comments, and for your paper! I will definitely give this a try and write an updated blog post with the results.
