0
101 Nov 08, 2008 at 00:00

Hi!
For a school project I must apply a filter to a sound.
I use OpenGL to draw the sound emitter, and I could use OpenAL for the sound processing. Is it possible with OpenAL to modify the sound data in real time (with my own function)? Or do I have to use another API?
Thanks!

#### 10 Replies

0
149 Nov 08, 2008 at 05:11

Yes. All you need to do is resubmit your wave data into the buffer. You could also use a streaming approach where you will be notified to make an update, and rather than supply the raw wave data, you can apply your filter before uploading it into the buffer. So long as your algorithms aren’t CPU intensive, you should be fine.
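For example, a rough sketch of that (the filter here is just a placeholder gain, and the OpenAL upload calls are left as comments since the buffer/source setup is omitted):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Placeholder filter: scale every 16-bit sample by 'gain' and clamp.
// Swap in whatever processing your project actually needs.
void applyFilter(int16_t* samples, std::size_t count, float gain)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        float v = samples[i] * gain;
        if (v > 32767.0f)  v = 32767.0f;   // clamp to the 16-bit range
        if (v < -32768.0f) v = -32768.0f;
        samples[i] = static_cast<int16_t>(v);
    }
}

// Streaming outline: copy the next chunk of source data, filter it,
// then upload it. The OpenAL calls are commented out so this compiles
// on its own.
std::vector<int16_t> prepareChunk(const int16_t* src, std::size_t count,
                                  float gain)
{
    std::vector<int16_t> chunk(src, src + count);
    applyFilter(chunk.data(), chunk.size(), gain);
    // alBufferData(buf, AL_FORMAT_STEREO16, chunk.data(),
    //              chunk.size() * sizeof(int16_t), 44100);
    // alSourceQueueBuffers(source, 1, &buf);
    return chunk;
}
```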

0
101 Nov 08, 2008 at 10:13

Ok thanks.
I’m new to OpenAL, so I have another question.
I’ve looked at the OpenAL documentation and found the alGetBuffer() family of functions.
With these I can get, for example, the frequency or the size of the sound (the .wav file is in a buffer). Now, how can I access the buffer data? I need to modify it in a loop (for).
Thanks.

0
149 Nov 08, 2008 at 14:25

Well, you have the wave data in system memory, so apply your filter there and then upload the result into OpenAL. Or if you stream (for better performance), load “X” seconds of wave data into system memory, apply your filter, and upload that to your next stream buffer. I never had to extract the wave data from OpenAL, although it does use DirectSound on Windows, and DirectSound can read the wave data back from the sound buffer, so I’m sure it’s supported in OpenAL somewhere. Not sure why you would want to go through all that difficulty though. You may also create “hiccups” in the playback when you do that, because accessing the sound buffer requires a lock on the memory. It’s best to avoid that altogether.

FYI, the GetBuffer methods are for retrieving information about the sound buffer, such as its frequency and size. They’re not used to actually get the wave data. I did a quick check and didn’t see any API calls for it, but maybe someone else knows. Again, I suggest you work with system memory instead. It’s easier and more efficient.

0
101 Nov 11, 2008 at 20:37

Hi.
So the data must be filtered separately?
For example, I’ve tried this library: http://www.mega-nerd.com/libsndfile/api.html#write. With it I load the wav file and get the sound data (as doubles); do I then modify this data and send it to OpenAL?
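To be concrete, this is the pipeline I have in mind (a sketch; the sf_readf_double() call that would fill the vector is only shown as a comment, and the filter is just a placeholder gain):

```cpp
#include <cstdint>
#include <vector>

// libsndfile delivers samples as doubles in [-1.0, 1.0], e.g.:
//   sf_readf_double(file, data.data(), frames);
// This sketch assumes 'data' is already filled that way.

// Placeholder filter: simple gain over the double samples.
void filterDoubles(std::vector<double>& data, double gain)
{
    for (double& s : data)
        s *= gain;
}

// Convert the filtered doubles to 16-bit PCM for alBufferData().
std::vector<int16_t> toPcm16(const std::vector<double>& data)
{
    std::vector<int16_t> out;
    out.reserve(data.size());
    for (double s : data)
    {
        if (s > 1.0)  s = 1.0;    // clamp before scaling
        if (s < -1.0) s = -1.0;
        out.push_back(static_cast<int16_t>(s * 32767.0));
    }
    return out;
}
```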

0
149 Nov 11, 2008 at 23:39

So the data must be filtered separately?

Most things, yes. OpenAL does come with some built-in processing, like 3D positional audio, velocity, Doppler, min and max ranges, cone of influence, etc… But when it comes to more sophisticated filters like equalizing, enveloping, and high quality reverberation, you have to modify the content before uploading it into OpenAL.
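As an example of that kind of content-side processing, here’s about the simplest filter you could run over the samples before the upload, a one-pole low-pass (a sketch; float samples assumed):

```cpp
#include <cstddef>

// One-pole low-pass filter: out[n] = out[n-1] + alpha * (in[n] - out[n-1]).
// 'alpha' is in (0, 1]; smaller values cut more high frequencies,
// alpha == 1 passes the signal through unchanged.
void lowPass(const float* in, float* out, std::size_t count, float alpha)
{
    float prev = 0.0f;   // filter state (previous output sample)
    for (std::size_t i = 0; i < count; ++i)
    {
        prev += alpha * (in[i] - prev);
        out[i] = prev;
    }
}
```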

It’s worth mentioning that EAX also allows a few levels of hardware accelerated reverberation, so if you have a Sound Blaster sound card you may want to investigate that library. I believe OpenAL is starting to support it? I’m still on the older version, but their new website was talking about it.

0
101 Nov 15, 2008 at 14:51

Hi.
I’ve tried another library: SDL_mixer.
It has an interesting function, Mix_RegisterEffect(); here is the documentation: http://jcatki.no-ip.org:8080/SDL_mixer/SDL_mixer_frame.html.
With this function I can work on the audio stream directly. So I’ve tried this code:

void hrtf(int chan, void* stream, int len, void* udata)
{
    delay[0] = 28;
    delay[1] = 1;

    for(int i = 0; i < (audioLength / 2); i++)
    {
        audioTemp[i*2]   = audioData[i*2];
        audioTemp[i*2+1] = audioData[i*2+1];
    }

    for(int i = 0; i < (audioLength / 2); i++)
    {
        if(i < delay[0])
        {
            audioData[i*2] = 0;
        }
        else
        {
            audioData[i*2] = audioTemp[(i*2) - (int)delay[0]];
        }

        if(i < delay[1])
        {
            audioData[i*2+1] = 0;
        }
        else
        {
            audioData[i*2+1] = audioTemp[(i*2+1) - (int)delay[1]];
        }
    }
}


I want to delay the left channel by 28 samples and the right channel by 1 sample.
The result is not the same as the “static version”, and the channel with the larger delay (the left one) has some noise.
Any suggestions?
Thanks.

0
149 Nov 15, 2008 at 15:31

Shifting by 28 samples isn’t a whole lot. It’s likely inaudible in a 44,100 Hz wave. Secondly, you’re clearing the stream bytes to 0 at the beginning and reassigning that 0 later in the loop. This would effectively silence out your sound. Thirdly, you’re shifting the stereo channels incorrectly. Wave data is treated as follows:

Assume an 8bit stereo audio stream at 44,100 samples per second.

data size:   1 byte * 44,100   1 byte * 44,100
<left_channel>    <right_channel>    <left_channel> .......


Some other debugging tips:

1) Try something simpler. Divide each value in stream by 2 (effectively reducing the volume). See if that works. If it doesn’t, perhaps SDL_mixer is not correctly handling your wave or something somewhere else is wrong.

2) Here’s a much more efficient way to shift channels.

// Get the total number of sound chunks
unsigned int blockSize = bytesPerSample * samplesPerSecond;
unsigned int numBlocks = (sizeOfWaveInBytes / blockSize) / 2;
unsigned int blockShift = 100; // Must be an even number for left channels

// Shift the left channel first
for (unsigned int i = 0; i < (numBlocks - blockShift); i += 2)
{
    memcpy(&stream[i * blockSize], &stream[(i + blockShift) * blockSize], blockSize);
}


As an exercise, you will need to create another loop to clean out the last part of the stream’s left channel data and repeat the step for the right channel data.
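Tip 1 might look like this as an SDL_mixer effect callback, assuming a 16-bit (AUDIO_S16SYS) stream (the function name is just a placeholder):

```cpp
#include <cstdint>

// Debug filter matching the Mix_EffectFunc_t signature: halve every sample.
// 'len' is in bytes, so a 16-bit stream holds len / 2 samples.
void halfVolume(int /*chan*/, void* stream, int len, void* /*udata*/)
{
    int16_t* samples = static_cast<int16_t*>(stream);
    int count = len / 2;
    for (int i = 0; i < count; ++i)
        samples[i] /= 2;
}
```

If you register this with Mix_RegisterEffect() and the sound plays back at half volume, you know the stream format matches your assumptions.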

0
101 Nov 15, 2008 at 16:37

@TheNut

Shifting by 28 samples isn’t a whole lot. It’s likely inaudible in a 44,100 Hz wave.

This delay comes from a function I have to use; here it is: http://interface.cipic.ucdavis.edu/CIL_tutorial/3D_sys2/Models/Mod_ITD.htm
@TheNut

1) Try something simpler. Divide each value in stream by 2 (effectively reducing the volume). See if that works. If it doesn’t, perhaps SDL_mixer is not correctly handling your wave or something somewhere else is wrong.

It works correctly. For example, I divided each value in the stream corresponding to the left channel by 2, and the volume of the left channel was reduced.
@TheNut

2) Here’s a much more efficient way to shift channels.

// Get the total number of sound chunks
unsigned int blockSize = bytesPerSample * samplesPerSecond;
unsigned int numBlocks = (sizeOfWaveInBytes / blockSize) / 2;
unsigned int blockShift = 100; // Must be an even number for left channels

// Shift the left channel first
for (unsigned int i = 0; i < (numBlocks - blockShift); i += 2)
{
    memcpy(&stream[i * blockSize], &stream[(i + blockShift) * blockSize], blockSize);
}


Here is my data:
I read the data in the “AUDIO_S16SYS” format, so I have 16 bits (2 bytes) per sample, at 44100 samples per second.
- blockSize is 2*44100
- stream is a pointer to “len” bytes
- numBlocks = (len/2)/2

If I run your code, the application crashes. The problem is the memcpy.

0
139 Nov 15, 2008 at 22:00

@TheNut

Assume an 8bit stereo audio stream at 44,100 samples per second.

data size:   1 byte * 44,100   1 byte * 44,100
<left_channel>    <right_channel>    <left_channel> .......


If you’re saying that it’s 44,100 bytes of left channel and then 44,100 bytes of right channel, that’s wrong…it interleaves channels with each sample, not in blocks.
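Because of that interleaving, a per-channel delay has to index whole sample frames (L0 R0 L1 R1 …) rather than big byte blocks. A sketch for 16-bit stereo (names are placeholders):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Delay each channel of interleaved 16-bit stereo by a per-channel number
// of samples. Frame i's left sample is at index 2*i, its right sample at
// 2*i + 1, so the delayed read stays inside the same channel.
void delayChannels(int16_t* data, std::size_t frames,
                   std::size_t leftDelay, std::size_t rightDelay)
{
    std::vector<int16_t> copy(data, data + frames * 2);  // untouched source
    for (std::size_t i = 0; i < frames; ++i)
    {
        data[2 * i]     = (i < leftDelay)  ? 0 : copy[2 * (i - leftDelay)];
        data[2 * i + 1] = (i < rightDelay) ? 0 : copy[2 * (i - rightDelay) + 1];
    }
}
```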

0
149 Nov 16, 2008 at 00:48

My bad. I must have been thinking of something else. Though enigma should have picked that up! :angry: Shame shame…