Heavy computing with GLSL – Part 2: Emulated double precision

Introduction

In my last post I introduced a simple Mandelbrot fractal shader in GLSL. Intentionally, the shader code uses single precision floating point variables, which ensures great performance but limits the zoom factor to about 17 before the limited accuracy of the floating point variables takes over and all you get is a blocky image at greater zoom levels:

Since I am very interested in discovering the beauty of the Mandelbrot set in close detail, I will improve the existing shader with emulated double precision variables (aka double-single) and see how far I can push it.

Since I didn’t find any GLSL code samples using emulated precision, I decided to use the DSFUN90 library by David H. Bailey, which is written in Fortran. Fortran is no good for GPUs, so I had to convert the parts I needed to GLSL.

Single precision floats can hold about 8 significant digits plus an exponent. Say you want to store the number 0.4888129819481270 in a single float variable: you get 4.8881298e-1 (8 digits and an exponent), and the remaining digits (the green part) are lost. On the other hand, you can store the number 0.0000000019481270 in a single float without any trouble (1.9481270e-9). Check out this page to convert any number to its single or double precision counterpart and see what happens.

You may have figured out by now that you can store the number 0.4888129819481270 as the sum of 4.8881298e-1 and 1.9481270e-9, and that each of these two parts fits into a single precision variable. So we just split the double precision value into two single precision variables as shown above and we are fine. Well, almost fine: the functions for basic math like addition or multiplication get a bit more complicated, but that’s where Mr. Bailey’s library comes in and helps us out with emulated double precision arithmetic.

Main Application

Preparing the double precision variables before transferring them to our shader works as described above:

1. Take a double (0.4888129819481270) and convert it to a single float (4.8881298e-1). Store it.
2. Convert the single float back to double (0.4888129800000000) and subtract it from the original value.
3. Store the result (0.0000000019481270) in the second float (1.9481270e-9).
vec2[0] = (float)xpos;
vec2[1] = xpos - (double)vec2[0];
ShaderProgram->setUniformValue("ds_cx0",  vec2[0]);
ShaderProgram->setUniformValue("ds_cx1",  vec2[1]);

The blue and green parts can be seen as the high and low part of our emulated double value.

Emulated arithmetics

The emulated double precision values (double-single) can be stored as vec2 in GLSL. This keeps the code short and improves readability (vec2(ds_hi, ds_lo)).

To evaluate our Mandelbrot formula (z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c) and do the other work needed to create a cool looking image, we need the following arithmetic operations:

• Convert to/from emulated double precision (double-single)
• Addition
• Multiplication
• Comparison

Conversion to double-single is easy since you just copy the value into the high part of the double-single (DS) variable.

vec2 ds_set(float a)
{
vec2 z;
z.x = a;
z.y = 0.0;
return z;
}
vec2 ds_two = ds_set(2.0);

To create a single float from our DS variable we just use the high part and drop the (much smaller) low part.

float s_two = ds_two.x;

Addition is a bit more complex since you have to take care of the carry from the low part into the high part.

vec2 ds_add (vec2 dsa, vec2 dsb)
{
vec2 dsc;
float t1, t2, e;

t1 = dsa.x + dsb.x;          // sum of the high parts
e = t1 - dsa.x;              // recover the rounding error of that sum
t2 = ((dsb.x - e) + (dsa.x - (t1 - e))) + dsa.y + dsb.y;

dsc.x = t1 + t2;             // renormalize: new high part...
dsc.y = t2 - (dsc.x - t1);   // ...and the leftover low part
return dsc;
}

Multiplication is even trickier…

vec2 ds_mul (vec2 dsa, vec2 dsb)
{
vec2 dsc;
float c11, c21, c2, e, t1, t2;
float a1, a2, b1, b2, cona, conb, split = 8193.;   // 2^13 + 1, Veltkamp split constant

// split both high parts into halves so the partial products are exact
cona = dsa.x * split;
conb = dsb.x * split;
a1 = cona - (cona - dsa.x);
b1 = conb - (conb - dsb.x);
a2 = dsa.x - a1;
b2 = dsb.x - b1;

// exact product of the high parts: c11 plus its error term c21
c11 = dsa.x * dsb.x;
c21 = a2 * b2 + (a2 * b1 + (a1 * b2 + (a1 * b1 - c11)));

// cross terms of high and low parts
c2 = dsa.x * dsb.y + dsa.y * dsb.x;

// add all terms and renormalize into high and low part
t1 = c11 + c2;
e = t1 - c11;
t2 = dsa.y * dsb.y + ((c2 - e) + (c11 - (t1 - e))) + c21;

dsc.x = t1 + t2;
dsc.y = t2 - (dsc.x - t1);

return dsc;
}

Smooth coloring

I have also improved the coloring method to smooth, continuous coloring as described in this post by Linas Vepstas or on Wikipedia.

if(length(z) > radius)
{
return(float(n) + 1. - log(log(length(z)))/log(2.));
}

Don’t be scared of all that logarithm stuff. Most modern GPUs can handle it very well.

Result

Starting with our blocky example above, the emulation shows excellent detail at zoom levels up to 42 before the precision of our emulated doubles is spent:

Performance

I get the following framerates in benchmark mode on my desktop ATI HD4870. They show that the emulation is roughly four times slower than single precision mode, but it still qualifies for realtime rendering.

Conclusion

Compared to single precision, which reaches a maximum resolution of 59e-9 units in the complex plane per pixel, the emulated doubles perform well down to 1.7e-15 units per pixel.

Emulated double precision is a cool thing to do and works quite well on modern GPUs. Let’s see if something can be done to improve accuracy further…

Qt Sourcecode

Updated version; see this post for details: GLSL_EmuMandel.zip

http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems

12 Responses to Heavy computing with GLSL – Part 2: Emulated double precision

1. Mike says:

This is quite slick. Do you have any hints on extending this to work with sin/cos, or is that pretty much a matter of look-up-tables (i.e. float textures) or implementing a Taylor series with the basic methods above?

• Henry says:

Check out the source of the DSFUN90 library. It contains several methods to compute sin/cos in emulated double precision, though I haven’t tried them.
The Taylor series is easy but also very expensive. The library also contains an optimized Taylor variant with precomputed tables, but that might not be fun to include in a shader program…
Using textures is definitely the fastest method, but I’m not sure how to use an emulated double as an index. Also, the texture would become very large.

I would suggest using the hardware sin/cos functions on the high part of the emulated double and performing (linear) interpolation for the values in between.

Let me know which way worked best for you.

2. Mauna says:

Have you tested this code on an NVidia GPU too (like in Part 1), and which driver version did you use? I have tested your code on Win-Vista with an NVidia GeForce GTX 260 (driver version 275.33) and there is no difference between the float and emulated double modes. It seems the driver optimizes the shader code (e.g. the multiplication) too aggressively, so the low parts of the doubles are 0 all the time.

• Henry says:

I have tested the code on an hp notebook (EliteBook or something) with an NVidia GPU and it worked fine. I don’t have the notebook or driver details at hand, but I can check next week.
Imho it does not make sense for the driver to alter the shader that much. There must be another problem…

• Mauna says:

Yes, it does not make sense… but if I write all the calculations in the ds methods on two lines (for dsc.x and dsc.y) with explicit calls of the float constructor
(e.g. dsc.x = float(float(dsa.x + dsb.x) + … ); instead of dsc.x = t1 + t2; in the add method)
I can see a difference between the float and emulated double precision modes. That is why I blame the driver for optimizing the shader too hard. I have no idea what else could be the reason for this behavior…

The difference, however, is not as big as in your screenshots. There is probably a small error in my code, because there is a sort of noise in the emulated double mode…

• Henry says:

Mauna, I just checked on an NVidia GPU and get the same problems as you describe. Let’s see if I can find out what’s wrong. I’ll keep you posted.

• Henry says:

There is an NVIDIA specific compiler optimization problem here. It is fixed and the source code is updated.
See this post for details.
See this post for details.

3. Kyle Messner says:

Can’t the multiplication code be equivalently reduced down to this?
vec2 mul(vec2 dsa, vec2 dsb)
{
vec2 dsc;
float c11, c2, t1, t2;

c11 = dsa.x * dsb.x;

c2 = dsa.x * dsb.y + dsa.y * dsb.x;

t1 = c11 + c2;
t2 = dsa.y * dsb.y;

dsc.x = t1 + t2;
dsc.y = t2 - dsc.x + t1;

return dsc;
}

I noticed you’re doing a lot of:
b - (b - c)
which, by simple algebra, is equivalent to just c.

Is there some kind of weird stuff going on at the bit level that I’m not understanding or am I justified in these simplifications?

• Kyle Messner says:

I just realized it doesn’t give quite the same precision like that, so there must be some stuff going on at the bit level I’m not understanding.

• Henry says:

You are correct if you use unlimited precision. But since we are limited to floats, there is a difference between “b - (b - c)” and just “c”. This is because you can store only 8 digits. If the subtraction “b - c” exceeds those 8 digits, some are discarded, making the following subtraction differ from the simplified expression “c”.
I know. It’s a bit weird…