I am a newbie to the DirectX world. I am using the latest DirectX SDK with Visual Studio 2005 and C# as the programming language. I have checked the samples and they are working.
I have a custom-made external sound card (connected via USB) that has 2 inputs, one for the left channel and one for the right channel. The DirectX SDK's Capture example works perfectly with it, capturing the sound from both inputs and saving it in a wav file. Now I want to save the left and right channels' sounds in separate files.
I think it must be pretty simple, but I have no clue about it. Thanks in advance.
The data you get from DirectSound is interleaved, i.e. for each sample point you get one sample per channel, left channel first, all in one stream. That makes sense if you think about it: the two channels are sampled at the same time from the input, so they should arrive at the application at the same time as well.
In your case all you need to do is split those two channels with a simple loop and write them to two mono wav files instead of one stereo file.
Thanks a lot, your reply solved my problem.
I too am trying to capture the stereo stream to two separate wave files using the C# sample in the DirectX SDK. I am not quite sure what Nils Pipenbrinck means when he says "split those two channels with a simple loop".
Any assistance you can provide would be greatly appreciated.
The channels are interleaved. Like Nils says, there is a sample from the left channel, then a sample from the right channel, then another from the left, then another from the right, etc. You just write a loop that goes through pairs of samples and writes even-numbered ones to the left-channel wave file and odd-numbered ones to the right-channel wave file.
In other words, the left and right channels are multiplexed in the same sample stream, so you'll need to write a demultiplexer.
To visualize the multiplexed stream (each sample-set is wrapped in brackets, and the left and right channels are indicated by L and R respectively), the incoming stream will probably look like this:
[LR][LR][LR][LR]…
Each letter (L and R) here represents one sample, so all you need to do is loop over the entire input and split them out.
Keep in mind, though, that a sample can be more than one byte long: in the case of 16-bit samples, each sample is two bytes (assuming a system with 8-bit bytes).
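A byte-level version of the demultiplexer, for an arbitrary channel count and sample width, might look roughly like this (a sketch; `Demux` is an illustrative name, and the bytes-per-sample value would come from `wBitsPerSample / 8` in the capture format):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// De-interleave a raw capture buffer into one byte stream per channel.
// bytesPerSample is wBitsPerSample / 8 from the WAVEFORMATEX (2 for 16-bit).
std::vector<std::vector<uint8_t>> Demux(const std::vector<uint8_t>& raw,
                                        size_t channels, size_t bytesPerSample)
{
    std::vector<std::vector<uint8_t>> out(channels);
    const size_t frame = channels * bytesPerSample;  // one sample-set
    for (size_t pos = 0; pos + frame <= raw.size(); pos += frame)
        for (size_t ch = 0; ch < channels; ++ch)
            out[ch].insert(out[ch].end(),
                           raw.begin() + pos + ch * bytesPerSample,
                           raw.begin() + pos + (ch + 1) * bytesPerSample);
    return out;
}
```

Copying whole samples as byte ranges like this keeps the loop independent of the sample width, so the same code handles 8-bit, 16-bit, or 24-bit PCM.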
My question is somewhat related to this one and goes like this:
I've written a 3D sound system in C++ that uses DirectSound3D. I have EAX support and everything works fine.
BUT: this is a realtime app that allows users to move interactively, but also to save screenshots (even high-res ones, by rendering many tiles that are combined later on) using predefined camera-paths, and to write out an AVI file. The sound effects can be triggered by different actions like proximity, switches, … . Since the high-res rendering sometimes cannot be performed in real-time (a lot of antialiased tiles have to be rendered offscreen and written to disk), I came up with a two-pass method for capturing sounds:
(1) only render the scene (be it tiled or not) to disk, with the sound disabled;
(2) turn on the sound effects, but don't render any visuals, and follow the camera-path in real-time while capturing the sound output using the "What U Hear" pin and writing it to the AVI file's sound track.
This works, but only with two channels (left/right). I'd like to capture all channels (e.g. 5.1 sound) and create AVIs with that many channels using the WAVEFORMATEX format.
So the question is: how can I capture more than 2 channels? Is it somehow possible to capture each channel separately?
Any help is appreciated, cheers,
Why do you need to capture the sound if your application is generating
it? Can’t you just compose the soundtrack by placing the sounds at the
appropriate times and doing software mixing?
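The software-mixing idea, as I understand it, is to skip capture entirely and sum each triggered sound into an offline master track at the sample offset where the engine fired it. A rough sketch, with illustrative names (16-bit samples assumed; `MixAt` is not a DirectSound call):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Offline software mixing: add one sound into the master track at the
// sample offset where the engine triggered it, clamping to the 16-bit
// range to avoid wrap-around on overflow.
void MixAt(std::vector<int16_t>& master,
           const std::vector<int16_t>& sound, size_t offset)
{
    if (master.size() < offset + sound.size())
        master.resize(offset + sound.size(), 0);  // grow track as needed
    for (size_t i = 0; i < sound.size(); ++i) {
        int sum = int(master[offset + i]) + sound[i];  // widen before adding
        master[offset + i] = int16_t(std::clamp(sum, -32768, 32767));
    }
}
```

Since the camera-path and trigger times are predefined, the offsets are known in advance, and the mix can run as slowly as the tiled rendering needs. Reproducing EAX reverb this way is the hard part, though, since those effects are applied by the hardware/driver, not by your code.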
Thank you Reedbeta and MuggenHor. I now have my two separate files, one for the left channel and one for the right channel, by doing just as you explained.
Probably I was not clear enough when describing the problem:
The application (sort of a game engine, but different) is generating
sounds/effects using DirectSound/Direct3DSound and EAX-effects for
reverberation, echo, … . The sound files are simple wav, ogg or mp3
files. These sound effects are placed in the 3d scene or linked to
moving objects. An EAX environment is generated in a preprocessing step
(analyzing the size of the different rooms, …) and while the user
navigates inside the 3d scene, the actual environment is determined to
give the correct “ambient feedback”.
The sounds themselves start/stop/loop according to the proximity of the
virtual camera to these objects or by user-actions or by time, … .
While this all works very well when achieving real-time frame-rates
(like navigating in the scene) the problem is capturing the sounds (or
soundtrack) to an avi file while rendering a lot of still frames. In
this case the camera is moving along a predefined path and effects are
triggered just like in the interactive mode. The camera movement is not
real-time because of the antialiasing techniques and the high resolution.
In my previous post I mentioned the workaround to this problem (sort of
ugly but I can live with it), but I can only record stereo (2 channels).
So my questions are:
(1) Is there a way to capture the samples (like in the
sampleGrabber-demo) in a one-shot mode (run sounds –> capture –>
do_something_that_takes_a_long_time –> get next sample) while
still getting EAX effects and more than two channels?
(2) If (1) is impossible, how can I capture the output of just one channel (say back-left)? This way I could write each channel separately to the combined AVI soundtrack.
(3) What exactly do you mean by software mixing? I know how to get hardware acceleration in DirectSound, but I thought switching to software mixing would decrease quality/available effects and of course disable EAX effects. (CPU usage is not important when writing the sounds to disk.)