Very cool trick with the shifting!

I just might go through my code and see if I can’t use it somewhere :)

Couldn’t you do the reverse for half-to-float conversions? Now, there is no NaN or denormal checking in this code, but it works when the values are valid, albeit with some lost precision :happy:

```
static __declspec(align(16)) unsigned half_sign[4]        = {0x00008000, 0x00008000, 0x00008000, 0x00008000};
static __declspec(align(16)) unsigned half_exponent[4]    = {0x00007C00, 0x00007C00, 0x00007C00, 0x00007C00};
static __declspec(align(16)) unsigned half_mantissa[4]    = {0x000003FF, 0x000003FF, 0x000003FF, 0x000003FF};
static __declspec(align(16)) unsigned half_bias_offset[4] = {0x0001C000, 0x0001C000, 0x0001C000, 0x0001C000};

__asm
{
    movaps xmm1, xmm0             // Input: four half values, one per 32-bit lane
    movaps xmm2, xmm0
    andps  xmm0, half_sign        // Isolate the sign bit
    andps  xmm1, half_exponent    // Isolate the exponent field
    andps  xmm2, half_mantissa    // Isolate the mantissa
    paddd  xmm1, half_bias_offset // Rebias exponent: add (127 - 15) << 10
    pslld  xmm0, 16               // Sign to bit 31
    pslld  xmm1, 13               // Exponent to bits 23..30
    pslld  xmm2, 13               // Mantissa to bits 13..22
    orps   xmm1, xmm2
    orps   xmm0, xmm1             // Result: four floats in xmm0
}
```
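For what it's worth, the same sequence written with SSE2 intrinsics (which also works in x64 builds, where Visual C++ drops `__asm` support) might look like this; an untested sketch, function name my own:

```c
#include <emmintrin.h>   // SSE2 intrinsics

// Four half values, one per 32-bit lane of h; no NaN or denormal handling,
// same as the assembly version above.
static __m128 half_to_float_sse2(__m128i h)
{
    __m128i sign = _mm_slli_epi32(_mm_and_si128(h, _mm_set1_epi32(0x8000)), 16);
    __m128i exph = _mm_and_si128(h, _mm_set1_epi32(0x7C00));
    __m128i expf = _mm_add_epi32(exph, _mm_set1_epi32(0x1C000)); // rebias 15 -> 127
    __m128i man  = _mm_and_si128(h, _mm_set1_epi32(0x03FF));
    __m128i bits = _mm_or_si128(sign, _mm_or_si128(_mm_slli_epi32(expf, 13),
                                                   _mm_slli_epi32(man, 13)));
    return _mm_castsi128_ps(bits);   // reinterpret the assembled bits as floats
}
```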

Hi all,

SIMD operations are indispensable for high-performance computing. For the latest generation of x86 CPUs, SSE even quadruples the floating-point performance, making a Core 2 Quad at 3 GHz more powerful than a GeForce 8600 GTS. So it can really pay off to convert performance-critical scalar code into vector code.

Unfortunately, compilers are still not adequately capable of generating SIMD code themselves. A first reason for this is that not every scalar operation can be translated directly to a vector equivalent. One of these is the shift operation. You can shift every element by the same shift value, but not by independent values. A second reason why writing SIMD code is difficult is that you can’t branch per element (or at least not without losing the benefits of SIMD). In this Code Gem I will address both issues to convert 32-bit floating-point values to 16-bit floating-point values.

Shifting four 32-bit values in an SSE register by independent values in another SSE register won’t become available as a single instruction until SSE5 appears in late 2008. Fortunately, there’s a trick to do almost the same thing with good old SSE2 (available since the Pentium 4 and Athlon 64). Shifting an integer value left is the same thing as multiplying it by a power of two (and shifting right is the same as dividing):

x << y = x * 2^y

So if we can convert our shift values into such powers of two, all we have to do is multiply to perform our shift operation. For this we can (ab)use the IEEE-754 single-precision format. It consists of a sign bit (s), 8 exponent bits (e), and 23 mantissa bits (m), and represents the following number:

(-1)^s * 2^(e - 127) * 1.m

Note the power-of-two factor. By setting the exponent field to y + 127 and the sign and mantissa fields to zero, we get exactly the value 2^y in floating-point representation; that takes just an integer add and a shift into the exponent position. But there’s another issue here. There is no SSE instruction for multiplying four 32-bit integer numbers. There is one for floating-point numbers though. So instead we convert our values to be shifted (x) to floating-point, multiply in floating-point format, and then convert back to integer. Let’s sum this up in code:

(Note that this code is aimed at Visual C++ 32-bit)
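A sketch of the sequence described above, using SSE2 intrinsics rather than inline assembly (the function name is my own):

```c
#include <emmintrin.h>   // SSE2 intrinsics

// Shift each 32-bit lane of x left by the corresponding count in y.
// Sketch only: counts must be non-negative, and the shifted values
// must fit in 25 bits (see the precision discussion below).
static __m128i shift_left_var(__m128i x, __m128i y)
{
    __m128  xf   = _mm_cvtepi32_ps(x);                      // x as floating-point
    __m128i e    = _mm_add_epi32(y, _mm_set1_epi32(127));   // biased exponent y + 127
    __m128  pow2 = _mm_castsi128_ps(_mm_slli_epi32(e, 23)); // bit pattern of 2^y
    __m128  prod = _mm_mul_ps(xf, pow2);                    // x * 2^y
    return _mm_cvttps_epi32(prod);                          // back to integer
}
```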

What about precision and such? Converting an integer into a floating-point number loses nothing as long as the value fits into the mantissa field, which is 23 bits long. And thanks to the implicit 1 and the sign bit, you can use signed integers up to 25 bits. Anything outside this range will lose some of the lower precision bits, which may or may not be critical for your application. Also, this version only works with positive shift values (i.e. shifting left), and when the result overflows the conversion returns 0x80000000.

So it has several restrictions but for the situations where it’s usable these five instructions are likely the fastest solution. Where I found this approach particularly useful is the parallel conversion of 32-bit floating-point numbers to 16-bit floating-point numbers (also known as the ‘half’ type).

This conversion has to deal with three cases: infinity, normal values, and denormal values. It’s the denormal case where the SIMD shift comes in. Dealing with the three cases without branching can be achieved by masking (AND operations) and combining (OR operations). I’ll spare you the details (but do not hesitate to ask questions) and present the code right away:
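To make the case logic easier to follow, here is a scalar sketch of the same masking approach (my own code, truncating rather than rounding, with NaN simply mapped to infinity); the SIMD version applies the same masks to four lanes at once, using the multiply trick above for the variable denormal shift:

```c
#include <stdint.h>
#include <string.h>

// Scalar sketch of the branchless three-case logic.
static uint16_t float_to_half(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);

    uint32_t sign = (x >> 16) & 0x8000;                // sign bit in half position
    int32_t  e    = (int32_t)((x >> 23) & 0xFF) - 127; // unbiased exponent
    uint32_t man  = (x & 0x007FFFFF) | 0x00800000;     // mantissa with implicit 1

    // Per-case masks: all ones when the case applies, all zeros otherwise.
    uint32_t is_inf = (uint32_t)-(e > 15);             // overflow: result is infinity
    uint32_t is_den = (uint32_t)-(e < -14) & ~is_inf;  // underflow: half denormal (or zero)
    uint32_t is_nor = ~(is_inf | is_den);              // everything else: normal

    uint32_t nor = ((uint32_t)(e + 15) << 10) | ((man >> 13) & 0x3FF);

    // Denormal case: the mantissa must be shifted right by a per-value
    // amount. This is where the SIMD code needs the variable shift trick.
    int32_t  sh  = -1 - e;                             // 13 + (-14 - e)
    uint32_t den = (sh >= 0 && sh < 24) ? (man >> sh) : 0;

    return (uint16_t)(sign | (is_inf & 0x7C00) | (is_nor & nor) | (is_den & den));
}
```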

A Core 2 Duo executes this code in about 14 clock cycles, or 3.5 cycles per element, which is clearly hard to beat with scalar code. Conversion from half to float can be done with a lookup table (one element at a time; SSE does not support gather operations yet).
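A sketch of that lookup-table approach (names are my own): build the 65536-entry table once with a scalar converter, which, unlike the SIMD code, can afford to branch on the special cases:

```c
#include <stdint.h>
#include <string.h>

static float half_table[65536];   // one float per possible half bit pattern

// Scalar half-to-float used only to fill the table, so branches are fine.
static float half_to_float_scalar(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t e    = (h >> 10) & 0x1F;
    uint32_t man  = h & 0x03FF;
    uint32_t bits;
    float f;

    if (e == 0x1F)                   // infinity and NaN
        bits = sign | 0x7F800000 | (man << 13);
    else if (e != 0)                 // normal: rebias exponent 15 -> 127
        bits = sign | ((e + 112) << 23) | (man << 13);
    else {                           // zero and denormals: man * 2^-24
        f = (float)man * (1.0f / 16777216.0f);
        memcpy(&bits, &f, sizeof bits);
        bits |= sign;
    }
    memcpy(&f, &bits, sizeof f);
    return f;
}

static void init_half_table(void)
{
    for (uint32_t i = 0; i < 65536; i++)
        half_table[i] = half_to_float_scalar((uint16_t)i);
}
```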

So I hope this inspires more people to explore the world of SIMD computation. This particular example might be useful for image processing, ray tracing, compression, etc., but the techniques have much wider applicability. So feel free to discuss SIMD, and especially SSE, below.

Cheers,

Nicolas

P.S.: SSE5 will include float-to-half and half-to-float conversion in a single instruction…