"Soundshader"

corlis 101 Jun 30, 2009 at 18:26

I’m curious whether anyone here is aware of some sort of “soundshading” library (or engine, …).

Compared to how games (and similar projects) have evolved visually - now using shaders to describe how objects should look instead of just splashing an image onto them (which even reduces the overall size of our game clients) - development on the audio side seems to have stalled.

I would imagine that a similar approach exists for sound, but I’m not aware of any project that does this: create artificial sounds that we could use as sound effects in a game.

Instead of packing loads of ogg/mp3/wav/… files into a game, it would be neat to be able to do this, at least for all those different *squeak*, *scratch*, *stomp*, *roar* effects, etc.

Anyone aware of anything like this?

4 Replies


aamesxdavid 101 Jun 30, 2009 at 19:54

I’m not perfectly clear on your analogy here - it’s worth noting that shaders don’t *create* objects, they just modify the way objects are presented. Audio engines do the same by changing volume, pitch, EQ, etc. Just about every audio engine can do these things - that’s what they’re for.
What you’re talking about with audio is real-time synthesis, which is different, but certainly still possible. It’s generally not practical, however, to generate audio from scratch in a game; more likely you’ll start with a “footprint” sound and generate sounds from that. After all, you need some idea of what the sound is going to be like - typing in audio.generateSqueak() won’t get you very far. ;) Wwise’s “SoundSeed” feature uses such a footprint sound to create variations of that idea, which solves the problems you listed.
As for generating sounds in the first place, you would need to do that separately to get the footprint sound. If you used simple real-time generation for your sounds, they would sound exactly the same every time - the biggest “don’t” in sound design.
But to attempt to fully answer your question: you could theoretically generate sounds on the fly and pass the synth randomized parameters, but I’d be willing to bet it wouldn’t be worth the effort. Using a footprint sound is, I think, the most practical way.
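(For illustration only, here is a minimal sketch of the “footprint plus randomized parameters” idea. This is not Wwise/SoundSeed code; the function name and parameter ranges are made up. It just resamples one source buffer with a random pitch and gain so each trigger sounds slightly different.)

```cpp
// Minimal sketch (not Wwise/SoundSeed code): take one "footprint" sample and
// produce a slightly different variation each time it is triggered, by
// randomizing playback rate (pitch) and gain. Names and ranges are made up.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

std::vector<float> makeVariation(const std::vector<float>& footprint, std::mt19937& rng)
{
    std::uniform_real_distribution<float> semitones(-2.0f, 2.0f); // +/- 2 semitones of pitch
    std::uniform_real_distribution<float> gainDb(-3.0f, 3.0f);    // +/- 3 dB of level

    const float rate = std::pow(2.0f, semitones(rng) / 12.0f);    // playback-rate multiplier
    const float gain = std::pow(10.0f, gainDb(rng) / 20.0f);      // linear gain

    std::vector<float> out;
    // Resample with linear interpolation: reading faster raises the pitch,
    // reading slower lowers it (and changes the length, like a tape machine).
    for (float pos = 0.0f; pos + 1.0f < static_cast<float>(footprint.size()); pos += rate)
    {
        const std::size_t i = static_cast<std::size_t>(pos);
        const float frac = pos - static_cast<float>(i);
        const float sample = footprint[i] * (1.0f - frac) + footprint[i + 1] * frac;
        out.push_back(sample * gain);
    }
    return out;
}
```

Every footstep would call makeVariation with the same footprint buffer but get back a slightly different result - one asset, many variations.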

rouncer 103 Jun 30, 2009 at 20:20

DirectSound does offer a few real-time effects to put on waves. These are for extra atmospherics: if the player is in a tunnel, for example, you can put extra echo on the effects.

If there were some kind of “audio shader”, that would mean there was a place to insert user-defined effects programmed from scratch.

It’s actually a good idea, but digital sound programming involves quite advanced mathematics, and coding visual shaders is a lot more newcomer-friendly: you can get results right at the start with shaders, but audio is different.

Most people leave it up to their editing equipment to put effects on waves and bake all the effects in before the game ships; teaching people IIR filters involves quite complicated maths.
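(To give a feel for what that maths looks like in the simplest case, here is an illustrative one-pole low-pass filter, the most basic kind of IIR filter. The names are mine and this is not part of DirectSound; it is just the textbook recurrence y[n] = a·x[n] + (1−a)·y[n−1].)

```cpp
// Minimal sketch of a one-pole IIR low-pass filter. Names are made up;
// this is not the DirectSound effect API.
#include <cmath>
#include <vector>

class OnePoleLowPass
{
public:
    OnePoleLowPass(float cutoffHz, float sampleRateHz)
    {
        // Standard one-pole coefficient for y[n] = a*x[n] + b*y[n-1].
        b_ = std::exp(-2.0f * 3.14159265f * cutoffHz / sampleRateHz);
        a_ = 1.0f - b_;
    }

    float process(float in)
    {
        z_ = a_ * in + b_ * z_;   // previous output feeds back in: that is the "IIR" part
        return z_;
    }

private:
    float a_ = 0.0f, b_ = 0.0f;
    float z_ = 0.0f;              // previous output sample
};

// Apply the filter in place to a whole buffer, e.g. to muffle a sound behind a wall.
void muffle(std::vector<float>& buffer, float cutoffHz, float sampleRateHz)
{
    OnePoleLowPass lp(cutoffHz, sampleRateHz);
    for (float& s : buffer)
        s = lp.process(s);
}
```

The recursion itself is tiny; deriving coefficients for anything fancier than this single pole is where the heavier maths starts.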

JarkkoL 102 Jun 30, 2009 at 20:32

XAudio2 in DirectX introduces DSP Effects, which they say “…are the pixel shaders of audio”.
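(To make that analogy concrete, here is a tiny, purely illustrative sketch - not the actual XAudio2/XAPO interface - in which an “audio shader” is just a function applied to every sample in a block, much like a pixel shader runs once per pixel.)

```cpp
// Illustration of the "pixel shader of audio" analogy only; this is NOT the
// real XAudio2/XAPO interface. An "audio shader" here is just a function run
// for every sample of a block, the way a pixel shader runs for every pixel.
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Takes the current sample value and the time in seconds, returns the new sample.
using AudioShader = std::function<float(float, float)>;

void processBlock(std::vector<float>& block, float sampleRate,
                  float startTime, const AudioShader& shader)
{
    for (std::size_t n = 0; n < block.size(); ++n)
        block[n] = shader(block[n], startTime + n / sampleRate);
}

int main()
{
    std::vector<float> block(512, 0.5f);   // dummy input block
    // Example "shader": a slow tremolo (amplitude wobble) on whatever passes through.
    AudioShader tremolo = [](float s, float t) {
        return s * (0.5f + 0.5f * std::sin(2.0f * 3.14159265f * 4.0f * t));
    };
    processBlock(block, 48000.0f, 0.0f, tremolo);
}
```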

corlis 101 Jun 30, 2009 at 20:46

What aamesxdavid mentions with the “footprint” sounds could actually solve my (more theoretical) problem (at least for now - I’m still at the planning stage). I also forgot to mention what I want to achieve, which is:

  • the ability to ‘create’ similar-sounding but still recognizably different sounds (see the sketch after this list)
  • no need to fill the game itself with huge numbers of similar-sounding effects
  • the ability to change some sounds on the fly, rather than having to call the audio artist and book a studio for another hour to get it done…
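(As a purely hypothetical sketch of the first point: a “squeak”-like sound synthesized from scratch, with the pitch glide, duration and envelope chosen randomly each time, so every call gives a similar but recognizably different result and no audio file has to ship with the game. The function name and all parameter ranges are invented for illustration.)

```cpp
// Hypothetical sketch: synthesize a "squeak"-like sound from scratch with
// randomized parameters, so each call yields a similar but different result.
// Function name and all parameter ranges are made up for illustration.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

std::vector<float> synthSqueak(std::mt19937& rng, float sampleRate = 48000.0f)
{
    std::uniform_real_distribution<float> startHz(1800.0f, 2600.0f); // where the squeak begins
    std::uniform_real_distribution<float> endHz(800.0f, 1400.0f);    // where it glides down to
    std::uniform_real_distribution<float> lengthS(0.08f, 0.2f);      // duration in seconds

    const float f0 = startHz(rng);
    const float f1 = endHz(rng);
    const float length = lengthS(rng);
    const std::size_t numSamples = static_cast<std::size_t>(length * sampleRate);

    std::vector<float> out(numSamples);
    float phase = 0.0f;
    for (std::size_t n = 0; n < numSamples; ++n)
    {
        const float t = static_cast<float>(n) / numSamples;   // progress 0..1 through the sound
        const float freq = f0 + (f1 - f0) * t;                 // downward pitch glide
        phase += 2.0f * 3.14159265f * freq / sampleRate;
        const float env = (1.0f - t) * (1.0f - t);             // quick decay envelope
        out[n] = env * std::sin(phase);
    }
    return out;
}
```

Each call produces a fresh variation, which covers the first two points; the third is mostly a matter of exposing those parameter ranges to the game or a tool instead of hard-coding them.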

Still, while I do understand that the “creation of sounds” is a mathematical thing, I think the same applies to any kind of 3D engine anyway; in many cases we do not constantly rewrite everything from scratch, but use lots of helper libraries to achieve what we need. In the case of the “graphics shaders” I’m referring to, we now use tools and HLSL to tell objects how they have to look.

As a computer-music hobbyist, seeing how some companies create softsynths and other sound-processing applications, sometimes with awesome results, I do think that with the processing power available today the mathematical calculations should be quite manageable (especially since only a few projects utilize the full power of all the cores available in many of today’s quad-core computers).

Taking my comparison of audio development vs. graphics development in games a bit further, I also ask myself what happened to all those applications we saw in earlier days, where we had fun for hours (OK, maybe minutes) typing in ridiculous sentences and then hearing our computer talk to us. And still, today, I have to read quests in an RPG rather than listen to them.

Earlier I had to read the whole game, but now I can at least see it. (True, there are games where you wish you hadn’t seen that one…)

EDIT:

@JarkkoL: Sounds good, and it’s at least a basis for what I’m “aiming” for.