Light vs Sound.

Alienizer 109 Jun 08, 2012 at 21:55

This may be a stupid question but, I’m working on real sounds in my game, and I was wondering if sound is attenuated over distance the same way as light, 1/(dist*dist), or is there a special formula for sound? I know that sound in space is nil since there are no air particles, and sound is not as loud at very high altitude due to much lower air density. So I’m talking strictly about sea-level sound propagation. Any ideas?

10 Replies


Vilem_Otte 117 Jun 09, 2012 at 12:31

Maybe try looking at this - Stokes’ law for sound attenuation (you can also google some physics books ofc) - https://en.wikipedia.org/wiki/Stokes%27_law_(sound_attenuation)

Anyway I’m sure you can go with some ‘constant / (dist * dist)’ as well (maybe with some fine tuning) … OpenAL afaik does something similar to hack the propagation.
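To make the ‘constant / (dist * dist)’ idea concrete: a minimal Python sketch of the clamped inverse-distance gain curve, in the spirit of OpenAL’s AL_INVERSE_DISTANCE_CLAMPED distance model (this is a standalone sketch with made-up default parameters, not OpenAL code). Note that 1/dist here is a falloff on *amplitude* (gain); sound *intensity* (power) falls off as 1/dist², just like light.

```python
def inverse_distance_clamped(dist, ref_dist=1.0, max_dist=100.0, rolloff=1.0):
    """Gain model in the spirit of OpenAL's AL_INVERSE_DISTANCE_CLAMPED.

    Distance is clamped into [ref_dist, max_dist], then gain falls off
    roughly as 1/dist.  ref_dist is the distance at which gain is 1.0,
    rolloff scales how quickly it decays.
    """
    d = min(max(dist, ref_dist), max_dist)
    return ref_dist / (ref_dist + rolloff * (d - ref_dist))
```

At `dist == ref_dist` the gain is 1.0; doubling the distance (with `rolloff=1`) roughly halves it, and past `max_dist` the gain stops decreasing - which is the “hack” part: real air keeps attenuating.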

For complete sound propagation you’d need sound reflection, correct propagation around corners and through passages, etc. etc. - e.g. doing path tracing for sound (that would be :ph34r:-like and it would really need a deep look into physics books) … but imo total overkill, and maybe less noticeable than spending the resources on graphics :D - especially when you know that most people just don’t have good enough speakers for this stuff.

Though I’m not saying it wouldn’t be worth trying - if I had the time, I’d definitely try it B)

Alienizer 109 Jun 09, 2012 at 15:05

ha! doing a path tracer for sound! wow, I’ve never thought of that one, and I got to try it, that should be fun!

Thanks Vilem.

Vilem_Otte 117 Jun 09, 2012 at 17:06

Actually, thinking about it (for a few minutes)…

That algorithm could quite easily be made real-time without heavy effort. It could even be optimized to run entirely on a single CPU core (which means it could be game-applicable).

So basically you’d have scene geometry (let’s say just static, no dynamics for now - though extending it would be quite easy by dynamically rebuilding the acceleration structure for ray tracing), sound sources (with info like position, volume, etc.) and a listener.

I’d basically do it in a very simple way - using bi-directional path tracing to compute sound propagation. The algorithm would be a 2-pass one:

1st Pass
1.) Generate ray directions at the sound source position (distributed with a probability function describing, f.e., the directionality of the sound source)
2.) Path-trace these, storing hit points as "virtual sound sources" (with all parameters, including a "time-offset" - e.g. rays would carry a time parameter - you'd get "sound motion blur").
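The 1st pass above could be sketched roughly like this in Python. Everything here is an assumption-laden sketch: `scene_trace(origin, dir)` is a hypothetical hook into whatever ray tracer you have (returning hit point, surface normal and distance, or None), `absorption` is a single scene-wide constant for simplicity, and the `(1 - absorption) / (1 + dist²)` gain term is an arbitrary choice that avoids the singularity at dist = 0.

```python
import math
import random
from dataclasses import dataclass

SPEED_OF_SOUND = 343.0  # m/s, roughly at sea level


@dataclass
class VirtualSource:
    position: tuple  # hit point acting as a secondary emitter
    gain: float      # attenuation accumulated along the path
    delay: float     # accumulated travel time ("sound motion blur")


def first_pass(source_pos, source_gain, scene_trace, absorption,
               n_rays=256, depth=2):
    """Pass 1: shoot rays from the sound source and store every hit
    point as a virtual sound source."""
    virtuals = []
    for _ in range(n_rays):
        # uniform random direction on the unit sphere
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)

        pos, gain, t = source_pos, source_gain, 0.0
        for _ in range(depth):
            hit = scene_trace(pos, d)
            if hit is None:
                break
            hit_pos, normal, dist = hit
            t += dist / SPEED_OF_SOUND
            gain *= (1.0 - absorption) / (1.0 + dist * dist)
            virtuals.append(VirtualSource(hit_pos, gain, t))
            # mirror-reflect the ray about the surface normal and continue
            dot = sum(a * b for a, b in zip(d, normal))
            d = tuple(a - 2.0 * dot * b for a, b in zip(d, normal))
            pos = hit_pos
    return virtuals
```

The `delay` field is the "time-offset" mentioned above: when mixing, each virtual source’s contribution would be delayed by that amount.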

2nd Pass
1.) Generate random ray directions from the listener position
2.) Path-trace these, and at every step (even the first one, which starts at the listener) calculate "sound visibility" to "every" sound source position (where "every" means every one reachable from the hit point - an optimization).
3.) Mix the sounds together - getting the correct result (don't forget to compute with the Stokes equation for sound attenuation)

It could also be extended with sound propagation through solid materials (in phase 2 of the 2nd pass, with a quite simple code extension: gather all hits along the ray, with their material parameters, until the ray reaches the source).

In total you’d need a far lower ray count than I get for real-time ray tracing on a single core - so making it real-time on a single CPU core should actually be possible.

Reedbeta 167 Jun 09, 2012 at 21:56

For path tracing sound I’d expect you’d want to keep track of the phase of the waves as well, since audible sound wavelengths are long enough (anywhere from ~2 cm to ~20 meters) that phase differences and interference/diffraction effects would be important. Fortunately sound waves are longitudinal, so there is no polarization to worry about. :) Also, note that since Stokes’ law depends on frequency, to realistically attenuate the sounds you’ll have to incorporate an equalizer or similar pass that can independently apply attenuation to different frequency bands.
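A sketch of that per-band idea, using Stokes’ law α = 2ηω²/(3ρc³) with rough sea-level constants for air (assumed values; note that real atmospheric absorption is dominated by molecular relaxation effects, standardized in ISO 9613-1, so Stokes alone underestimates it - treat this as a lower bound):

```python
import math


def stokes_alpha(freq_hz, viscosity=1.81e-5, density=1.2, c=343.0):
    """Stokes' law attenuation coefficient alpha, in 1/m:
        alpha = 2 * eta * omega^2 / (3 * rho * c^3)
    Amplitude after distance d is multiplied by exp(-alpha * d).
    Defaults are rough values for air at sea level."""
    omega = 2.0 * math.pi * freq_hz
    return 2.0 * viscosity * omega * omega / (3.0 * density * c ** 3)


def band_gains(bands_hz, distance):
    """Per-band gain factors for a simple equalizer pass."""
    return [math.exp(-stokes_alpha(f) * distance) for f in bands_hz]
```

Because alpha scales with frequency squared, doubling the band frequency quadruples the attenuation coefficient - high bands die off first with distance, which is exactly why a single broadband gain isn’t enough.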

__________Smile_ 101 Jun 09, 2012 at 22:54

Also keep in mind that every surface is like a mirror for sound, not diffuse, so general path tracers work incorrectly. I think it’s better to solve the wave equation directly on some sparse grid (about 1 m cells).
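For illustration, one leapfrog (FDTD) time step of the 1-D wave equation on such a grid might look like this - a sketch, not production code; with 1 m cells and c = 343 m/s the stability (CFL) condition forces dt ≤ dx/c ≈ 2.9 ms:

```python
def fdtd_1d_step(u_prev, u_curr, c, dx, dt):
    """One leapfrog step of the 1-D wave equation u_tt = c^2 * u_xx
    on a regular grid with fixed (perfectly reflecting) endpoints.

    Stability requires the Courant number c*dt/dx <= 1."""
    r2 = (c * dt / dx) ** 2
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                     + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
    return u_next
```

A 3-D version on 1 m cells for a whole level is a lot of cells per step, which is presumably why the suggestion is a *sparse* grid - only simulate where the field is non-negligible.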

Alienizer 109 Jun 10, 2012 at 02:27

WOW, it works incredibly well and sounds much better than commercial games!

Each character has a sound and a loudness, as in light color and power. It also has a normal which defines the direction of the sound it emits, but it’s feathered over 180 degrees.

Each material has an acoustic absorption value, and of course, the remainder is reflected - like dark colors absorb more and reflect less. No diffuse tho.

For each audio channel (speakers)…

  1. get the global position in the scene, and the normal of each one, which defines the direction it is listening in.

  2. shoot random rays out into the scene, and for each ray…
    a. if nothing is hit, do next.
    b. cast a ray to each audible source
    c. if it hits, get the sound * constant/(dist*dist) * (1-material_absorb)
    d. next
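The steps above could be sketched like this in Python. The `scene_trace` and `visible` callbacks are hypothetical hooks for whatever ray caster the engine provides (`scene_trace` returns hit point, material absorption and distance, or None; `visible` says whether a segment between two points is unoccluded):

```python
import math
import random


def channel_gain(listener_pos, scene_trace, visible, sources,
                 n_rays=128, k=1.0):
    """One-bounce gather per audio channel: shoot random rays from the
    listener position and connect each hit point to every audible
    source, accumulating volume * k/(dist*dist) * (1 - absorption)."""
    gains = {id(s): 0.0 for s in sources}
    for _ in range(n_rays):
        # uniform random direction on the unit sphere
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)

        hit = scene_trace(listener_pos, d)
        if hit is None:
            continue                      # step a: nothing hit, next ray
        hit_pos, absorb, _ = hit
        for s in sources:                 # step b: connect to each source
            if not visible(hit_pos, s['pos']):
                continue
            dist = math.dist(hit_pos, s['pos'])
            # step c: sound * constant/(dist*dist) * (1 - material_absorb)
            gains[id(s)] += s['volume'] * k / (dist * dist) * (1.0 - absorb)
    return gains
```

This omits the listening-direction feathering from step 1 (you’d weight each ray by how well `d` aligns with the channel normal) and the multi-bounce extension mentioned below, but the structure follows the a–d steps directly.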

The above produces stunning results. I have modified it to do more depth, like a path tracer, and I’ve included real-time propagation of sound, and then wow, it is absolutely amazing. When you move the character, the sound is so 3D, it feels so real, you know exactly where the monsters are!

Does this already exist? Or did I simply make a money-maker thing public domain? Because none of the commercial games we play have real 3D sound.

macnihilist 101 Jun 10, 2012 at 07:50


Does this already exist? Or did I simply made a money maker thing public domain? Because none of the commercial games we play have real 3D sound.

I don’t know about games, but in research, ray tracing and other ‘graphics’ algorithms have been used for quite a while to do sound rendering.
My former employer, for example, had a group working on this stuff; google for ‘sound tracing’ and ‘phonon tracing’.
There are also real-time approaches.
So the idea is certainly not new, but packaged in a cool library it could be a nice thing.

Alienizer 109 Jun 10, 2012 at 14:47

So I did not invent anything then.

Vilem_Otte 117 Jun 10, 2012 at 23:41

That smiley is just EPIC! :D … almost as epic as chibi Zaraki Kenpachi:


I’ve actually never seen a *solid* library doing this kind of stuff, and I’m sure lots of people would definitely like to use it (incl. me, for example).

Of course it might need lots of work (because users will put insanely large meshes into it, etc. - so you won’t get far without fast KD-trees or (Q)BVHs, for example … of course one could use, f.e., LuxRender as a library to solve those problems … or create one’s own solution).

vdf22 101 Jun 15, 2012 at 16:49

FMOD does this:

Custom geometry engine to add polygon scenes (FMOD Ex will factor in obstruction/occlusion)

Basically you give it a scene and it will perform calculations for you.