So I started a new audio engine, called SoLoud. It’s meant to be a
simple audio solution for games.
It’s available on GitHub. Currently the only back-end is SDL, but adding new ones should be relatively simple: you just ask SoLoud for N samples whenever you need them.
The github page has more technical details, background and stuff.
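As a rough sketch of what a back-end looks like (function and type names here are illustrative, not SoLoud’s actual API), an SDL-style audio callback just forwards the request to the mixer:

```cpp
#include <cstring>

// Hypothetical engine front-end; the real API may differ.
struct Engine {
    // Fill 'buffer' with 'frames' interleaved stereo float frames.
    void mix(float *buffer, int frames) {
        // A real mixer sums active voices here; this sketch emits silence.
        std::memset(buffer, 0, frames * 2 * sizeof(float));
    }
};

// SDL-style callback: SDL hands us a raw byte buffer to fill.
static void audio_callback(void *userdata, unsigned char *stream, int len) {
    Engine *engine = static_cast<Engine *>(userdata);
    int frames = len / (2 * (int)sizeof(float)); // stereo, float samples
    engine->mix(reinterpret_cast<float *>(stream), frames);
}
```

The point is that the back-end owns the timing: the engine is purely pull-based, so porting means little more than wiring this callback into another audio API.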
The only non-barebones feature it currently has is the optional roundoff clipper.
Here’s what it sounds like:
Warning: that sample is a bit loud. It plays the sample loop first with
the normal clipper, then with the roundoff one, at 1x, 2x, 3x, 5x and 10x volumes.
The roundoff clipper simply compresses louder sounds; quieter sounds are
left more or less alone.
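For illustration, here’s one common soft-clipping curve in that spirit: a cubic that is nearly linear for quiet samples and saturates toward ±1 for loud ones. This is a generic example, not necessarily SoLoud’s exact roundoff formula:

```cpp
#include <cmath>

// Generic cubic soft clipper: quiet samples pass through almost
// unchanged, loud ones are compressed toward +/-1 instead of being
// hard-clipped at the limit. (Illustrative; not SoLoud's exact curve.)
float soft_clip(float v) {
    if (v <= -1.5f) return -1.0f;
    if (v >=  1.5f) return  1.0f;
    return v - (4.0f / 27.0f) * v * v * v; // smooth over [-1.5, 1.5]
}
```

At v = 0.1 this returns about 0.0999 (essentially untouched), while the curve reaches exactly ±1 at ±1.5, which is why moderately over-driven mixes sound compressed rather than crackly.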
It sounds fine to me even at 3x, and not completely broken at 10x (100x does, however, get rather noisy). So unless you’re an audiophile, you can just switch the roundoff clipping on and forget about it.
Oh, and because every project needs a logo..
Y’AAL - Amerikan for “yet another audio library” :) But SoLoud works, I guess.
Just a couple comments:
1. Why use SDL? I prefer a minimalist approach and so I use DirectSound
for Windows and OpenAL for everything else. Both keep my packaging down
to a minimum and offer me enough to send and capture audio. OpenAL
supports a lot more platforms and its buffer management is kind of neat
for streaming audio packets. It synergizes very well with the ogg
bitstream format. Of course, Creative did a terrible terrible job
managing that project. OpenAL Soft breathed fresh air into the project,
but it’s a shame its licensing is quite a burden.
Do you have plans to add in audio streaming, synthesis, and DSP? It
seems like you’re kind of edging towards that. I implemented a
synthesizer with VST and MIDI support and I swear by its design. It’s
extremely flexible to add filters, tracks, effects, and virtual
instruments. I could see an easy-to-use open library that provides that kind of support going far.
Some games may play and forget sounds, but I think that totally kills
the immersion. I know a lot of frameworks document this practice, even
when their framework supports audio instancing and streaming. IMO,
game devs should treat audio seriously and put some effort into
actually getting sounds to mix properly and do other nice
effects like cross-fading. My only advice would be to add support for
that and demonstrate how to manage audio instances, mix them together,
and stream music or other long sounds. Encourage that practice over the
“set and forget” approach. If it’s about making the library easier to
use, just write some helper classes to minimize boilerplate code in the
application.
I use SDL, so that’s the backend I added. Other backends are possible
(and should not require changes in SoLoud proper). As for OpenAL, I
don’t like it for some reason.
I structured SoLoud so that streaming, synthesis etc. is possible.
Currently SoLoud is something like a thousand lines of code, so
naturally it doesn’t have everything in it. I primarily wanted to have a
system that doesn’t have silly “one channel for music” limitations..
Timing-wise, SoLoud is too inaccurate for playing music - sounds trigger
at the start of the next mix call.
Oh well, that’s a completely different system - for now, I just want
something simple that doesn’t get in the way. What you’re describing
basically starts from having 3d sounds, and suddenly we’re into OpenAL
all over again..
Sound API’s are my pet hate.
I have never come across one that is simple to use, yet flexible enough
to be useful.
The one I have to use at the moment always ends up with me skyping lots
of swear words on the technical channel.
It’s taken me 18 months to get them to realise it’s crap and start
thinking about an alternative.
You don’t need 3d audio support for streaming or instancing sound
buffers. Instancing is something that should be supported at the audio
API level. DirectSound and OpenAL have their ways, but I’m sure SDL does
too (especially since SDL uses DirectX under the hood). Like VBOs, you
only upload the audio data once, but create multiple instances to mix
during playback. For example, you have a game that plays a sound when
you pick up an item. Two items back to back are picked up and you hear
the first sound play only for it to cut off when the second plays. It’s
kind of tacky. With instances, you play the same audio twice and the
mixed result produces an accurate playback of what happened. Some
frameworks like XNA would garbage collect that for you in a “set and
forget” type way, but it’s still important to provide control so as not
to overflow the mixer (ie: set max simultaneous sound playback).
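In software, instancing boils down to one shared sample buffer with multiple playback cursors summed into the output. A minimal sketch (names illustrative, not any particular library’s API):

```cpp
#include <cstddef>
#include <vector>

// One shared, immutable sample buffer; each playing instance is just
// a cursor into it. Summing the cursors mixes overlapping playbacks.
struct Voice {
    const std::vector<float> *data; // shared sample data, "uploaded" once
    std::size_t pos;                // per-instance playback position
};

void mix_voices(std::vector<Voice> &voices, float *out, std::size_t frames) {
    for (std::size_t i = 0; i < frames; ++i)
        out[i] = 0.0f;
    for (auto &v : voices)
        for (std::size_t i = 0; i < frames && v.pos < v.data->size(); ++i, ++v.pos)
            out[i] += (*v.data)[v.pos];
}
```

Capping the size of the voice list is exactly the “max simultaneous sound playback” control mentioned above.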
Also, what do you mean your library is not accurate enough for music?
From a streaming perspective, as long as you keep the buffer full every
X milliseconds, you’re good to go. Even a lazy clock() check is accurate
enough for a 60 msec buffer. You can use timeGetTime (Windows) or
gettimeofday (Posix) if you need something more precise for low latency
audio.
Erm, of course there’s ‘instancing’. Wouldn’t want it any other way. If
you read the description on the GitHub page, you’ll find a description
of the channel picking strategy.
There’s no reason to store the sample data on the sound card or
whatever, and mixing is done in software. Even with my naive
implementation, mixing of 100 concurrent sounds only takes a couple
percent of CPU on my (non-bleeding edge) rig.
As for timing: new sounds are triggered whenever the low level (SDL in
this case) asks for samples, which usually happens in 2k- or 4k-sample
increments. Triggering only on those boundaries is, IMO, too inaccurate
for music.
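To put numbers on that: the worst-case timing error is one mix buffer’s duration, which is just buffer size over sample rate. Assuming typical values of 4096 samples at 44.1 kHz (my numbers, not stated in the thread), that’s roughly 93 ms between possible trigger points — easily audible jitter for musical note timing:

```cpp
// Trigger granularity: sounds can only start at mix-buffer boundaries,
// so the worst-case timing error is one buffer's duration in ms.
double trigger_granularity_ms(int buffer_samples, int samplerate) {
    return 1000.0 * buffer_samples / samplerate;
}
```

Halving the buffer halves the jitter, at the cost of more frequent callbacks; sample-accurate music triggering would instead need sounds scheduled at offsets *within* the mix buffer.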