(Good) Sound design

Let’s break it down even further and focus for a bit on just the ‘sound’ part of soundwalk. As the name suggests, any possible sound can be used in a soundwalk: there is no bias towards music, sound effects, spoken word or any other category of sound.

This gives you an infinite number of possibilities, which can be very liberating but at the same time a bit scary: how do you make a choice when basically anything is possible?

Of course, specific categories of sound are traditionally associated with specific purposes. Spoken word might be the easiest way to relay the information that a certain site was a textile mill back in 1873, a sound effect of the grinding mill might be the best way to make walkers feel like they are actually there and then, and a bit of music can instantly convey the grim, hard life the workers might have lived.

However, there are no laws, and no sound police checking to what end you use a sound. Even within just the traditional Western canon, people like Luigi Russolo, Pierre Henry, Pierre Schaeffer, John Cage and many, many more have extended the boundaries of what was traditionally considered ‘music’ to include any possible sound, not just what can be written as a score and played by a conventional instrument. Hip hop and electronic music popularised sampling, and just about any kind of ‘noise’ is these days considered a musical genre rather than an unwanted byproduct of ‘true’ composition.

Parallel to this, there are almost infinite ways to shape a sound, and it can be done by anyone. Technology available on average consumer devices brings possibilities that used to be accessible only to the fortunate users of high-end studios or research facilities. The result is that we not only have a huge number of sounds to choose from for our walks, we also have the means to make them sound exactly the way we want. This process is what we call sound design.

Consider something basic, like a character leaving the scene in an audio segment you are making. You might have chosen the sound of a door closing as a simple way to relay this information. Even within such a seemingly simple bit of sound design you have a ton of possibilities to tell something with this sound. Is the door closed in anger, very casually, or by someone trying to be as stealthy as possible? Open and close the door nearest to you in these different ways and you will hear there is a big difference. And there’s more: is it a wooden door, light or heavy, well maintained or creaky? Is its volume realistic in relation to the other sounds in your sound design, or does it stand out because it’s especially loud or quiet? How does it blend in with the background of the other sounds you have going on? What does the shutting of the door sound like in terms of acoustics: like a big hollow space being closed off, or a small room full of cushioned furniture? Is it sharp like a gunshot, or does it slowly but steadily fall shut like a very heavy object?

You don’t have to answer all of these questions for every sound in your work, but it’s good to realise that you have control over almost all of these parameters if you want to. The recording itself you can either make yourself, if it’s something easily accessible like a door, or get from a place like freesound.org or a commercial sound effects library. Chances are, however, that you will find something that’s close but not exactly what you need. This is where a wide range of common sound design techniques comes in:

First of all there’s the most basic parameter when working with sound: volume. You don’t have unlimited control over this, since a soundwalk is listened to at whatever volume the walker sets on their device. But you as the creator do control the relative volumes: what is the loudest, and what is the softest? The difference between these two levels is called the dynamic range, and it can be a very powerful tool. First of all you want to make sure that the things that need to be loud are actually loud. But if you make everything loud, the loudest parts don’t really stand out anymore. So it’s important to also think of an average volume, and to consider what the lowest volume could be, for things that you want to be there but not draw too much attention to. A low volume can also be a tool to draw a listener in on a certain bit, since it takes a bit of extra effort to understand. Our human hearing is very sensitive to relative volumes, so if you want something to feel really loud, a lot can be done by making the section before it rather quiet instead of just looking for the absolute maximum volume. Compare it to the classic ‘jump scare’ in a horror film: the bit just before is always really quiet and unremarkable to make the sudden scare extra terrifying.
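Relative levels like these are usually expressed in decibels. As a minimal sketch of the idea, here is how the dynamic range between the loudest and softest element of a mix could be calculated; the amplitude values are made up for illustration:

```python
import math

def dbfs(amplitude):
    """Convert a linear amplitude (0..1) to decibels relative to full scale."""
    return 20 * math.log10(amplitude)

loud = 1.0    # the loudest peak in the mix (full scale)
quiet = 0.01  # the quietest element we still want to be audible
dynamic_range_db = dbfs(loud) - dbfs(quiet)  # 40 dB between loudest and softest
```

Because the scale is logarithmic, halving an amplitude always subtracts the same number of decibels (about 6), which matches how our hearing judges relative loudness.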

Volume is sometimes more complicated to manipulate, for example when a sound has a loud start (the ‘attack’ of the sound) but goes very quiet right after. You might want to hear more of that relatively quiet part without the loud part becoming overbearing. Here tools like compressors, limiters, expanders, transient shapers and other processors in the so-called ‘dynamics’ category can help you. A compressor, for instance, can lower the level of the peak of a sound without touching its quieter tail, which means you can bring the volume of the whole sound up without the peak getting too loud. Theoretically you could also make such volume edits manually in an audio editor, but it would take almost microscopic precision to get the timing right, and you can see how a tool that automates this process can be much more precise, or at the very least save a lot of time.
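The core idea of a compressor can be sketched in a few lines: any level above a chosen threshold is reduced by a ratio, after which the whole sound can safely be turned up again (‘make-up gain’). Real compressors also smooth their gain changes over time with attack and release controls, which this toy version skips; the sample values are made up:

```python
def compress(samples, threshold=0.5, ratio=4.0):
    """Reduce any level above `threshold` by `ratio`; quieter samples pass unchanged."""
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # only the part of the level above the threshold is scaled down
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

# a sound with a loud attack (0.9) and a quiet tail (0.2):
# the peak drops to ~0.6 while the tail is untouched
squashed = compress([0.9, 0.2])
```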

When the volume of a sound is to your liking, another parameter you will find yourself tweaking often is the tonal balance. The tool to adjust this balance is called an equaliser, which is basically a set of frequency-dependent volume controls. For example, you might have recorded the sound of a glass breaking. The volume is basically right, but there is one very specific tone which makes it simply too sharp, shrill or intense to listen to. Here an equaliser can help you change the level of that one frequency without meddling with the rest of the sound. In sound design the use of ‘EQ’ can also be very extreme, for example leaving out everything except the low end of a specific sound.
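As an illustration of that extreme case, keeping only the low end, here is a minimal one-pole low-pass filter. Real equalisers use more sophisticated filters with multiple adjustable bands, but the underlying principle of frequency-dependent volume is the same:

```python
import math

def lowpass(samples, cutoff_hz, sample_rate=44100):
    """One-pole low-pass filter: keeps low frequencies, attenuates high ones."""
    a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)  # smoothing coefficient
    out, prev = [], 0.0
    for s in samples:
        prev = (1 - a) * s + a * prev  # blend the new sample with the running average
        out.append(prev)
    return out

# a constant (very low frequency) signal passes through almost unchanged,
# while a rapidly alternating (very high frequency) one is strongly attenuated
low = lowpass([1.0] * 2000, cutoff_hz=200)
high = lowpass([1.0, -1.0] * 1000, cutoff_hz=200)
```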

Which brings us to another staple of sound design: layering. Up until now you might have assumed you needed to find or record the perfect sound, or at least one close enough to perfection to use with some basic corrections. However, many, many sounds are designed by layering multiple source sounds. You might have a great recording of some thunder, but made from relatively far away. So it has a nice ‘crackle’ to it, but not really the ‘boom’ you also imagined for a big, bold thunderclap in your sonic story. Instead of continuing the search for the perfect thunder recording that has both, you can use another recording with a better ‘boom’ (but no nice ‘crackle’) and layer the two. Or even mix in the ‘boom’ from something else entirely, like a kick drum with a long tail from a synthesiser.
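Digitally, layering is nothing more than adding sample values together, with a gain per layer to set the balance. A sketch of the thunder example, with made-up numbers standing in for the recordings:

```python
def layer(*tracks):
    """Mix several (samples, gain) pairs into one track by simple addition."""
    length = max(len(samples) for samples, _ in tracks)
    mix = [0.0] * length
    for samples, gain in tracks:
        for i, s in enumerate(samples):
            mix[i] += s * gain
    return mix

crackle = [0.3, 0.2, 0.1, 0.0]  # distant thunder: nice crackle, no boom
boom    = [0.8, 0.6, 0.4, 0.2]  # kick-drum-like boom, no crackle
thunder = layer((crackle, 1.0), (boom, 0.5))  # the boom layer turned down to taste
```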

A bit more advanced, but still extremely common, is adding electronically generated reverberation to a sound, usually referred to as reverb. Every sound in our world, with the exception of those produced in an anechoic (literally ‘without echo’) chamber, produces a form of reverb: the part of the sound that lingers after the original sound has passed. This lingering is caused and shaped by the acoustics of the space in which the sound is produced. Sometimes it’s very noticeable, for instance when somebody coughs in a cathedral and you hear the reverb of the cough for seconds after the original sound has completely stopped. In a tiled bathroom the reverb time might be quite short but still have a noticeable effect, which might be part of the attraction of singing in the shower. With every sound recorded we also always hear the space it was recorded in, because of the reverberation. Often very subtly, sometimes very clearly, but it’s always there. It’s difficult (though not technically impossible) to remove reverb from a sound, but you have a gazillion options for adding it. Apart from obvious things like cathedral effects, you can use this as a powerful tool to make different sounds appear to be in the same space. Or to make it very clear that something is in another space, at the moments you need to convey that information.
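Digital reverbs typically combine many delayed and filtered echoes; the simplest building block is a single feedback delay, where each repeat comes back a bit quieter. A toy sketch of that building block (2000 samples is roughly 45 ms at a 44.1 kHz sample rate; the numbers are illustrative):

```python
def add_reverb(samples, delay=2000, decay=0.5, echoes=3):
    """Feedback delay: every `delay` samples the signal repeats, `decay` times quieter."""
    out = list(samples) + [0.0] * (delay * echoes)  # extra room for the tail
    for i in range(delay, len(out)):
        out[i] += decay * out[i - delay]
    return out

# a single click (impulse) comes back as a decaying series of echoes
tail = add_reverb([1.0])
```

A full reverb algorithm runs several of these delays in parallel at slightly different lengths, so the echoes smear into a continuous wash rather than a distinct repeat.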

Where reverb tells us something about a space, a concept called sound spatialisation tells us where a sound is in that space. A spatial sound format almost everyone is familiar with is stereo, which indicates a sound is played back over a set of two loudspeakers (two very tiny loudspeakers in the case of earphones). The number two is not random: it corresponds to the two ears we have as humans. A combination of volume and time differences, i.e. at which ear the sound arrives first and which ear perceives it as louder, allows us to hear where a sound is coming from. When you take a mono recording (made with a single microphone) and play it back in stereo without any processing, you will hear it equally loud in both ears, which suggests that the source of the sound is straight in front of us. By adjusting the so-called panning of the sound you can make it louder in the left loudspeaker, which creates the illusion that the sound is to our left. This technique of mixing sounds not just by loudness, as you do when working in mono, but by spatial positioning in the sound field is very powerful, since it literally allows you to work with the spatial concepts of width and depth. Regular stereo panning only affects volume, but a time difference can be created by adding tiny amounts of delay to sounds that are further away, or by using a dedicated sound spatialiser that does this time and volume processing based on where you place sounds in a virtual room.
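A common way to implement the volume side of panning is the so-called equal-power pan law, which keeps the combined energy of the two channels constant so a sound doesn’t dip in perceived loudness as it moves across the field. A minimal sketch:

```python
import math

def pan(samples, position):
    """Equal-power panning. position: -1.0 = hard left, 0.0 = centre, 1.0 = hard right."""
    angle = (position + 1) * math.pi / 4  # map -1..1 onto 0..pi/2
    left_gain, right_gain = math.cos(angle), math.sin(angle)
    left = [s * left_gain for s in samples]
    right = [s * right_gain for s in samples]
    return left, right

left, right = pan([1.0], 0.0)  # centred: equally loud in both channels
```

Because cos² + sin² = 1, the total power stays the same at every pan position, which is exactly the property the ‘equal-power’ name refers to.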

An important side note: in a soundwalk the audience is of course free to move around as they please, which means you can’t always be sure how their left and right ears relate to their position in the physical world. If it’s important that a certain sound is connected to a certain physical object or place, Echoes has you covered with a so-called 3D audio function. With this function you can designate a specific GPS position as not only the trigger but also the perceived source of a sound. When walkers have this position to their left, they will hear it mainly in their left ear, and vice versa. For example, you can connect a sound to a statue in a city square. Where a ‘normal’ Echo could start a bit of sound when you reach the statue, the 3D function can help suggest that the sound is coming from the statue: when you move closer it gets louder, when you move back it gets softer, and when you stand sideways you can hear whether the statue is to your left or to your right.
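The underlying idea can be sketched as follows: volume falls off with distance, and the pan position follows the angle of the source relative to the walker’s heading. This is purely an illustration of the principle; it is not how Echoes actually computes its 3D audio, and the coordinates and attenuation curve are made up:

```python
import math

def spatialise(listener, heading_deg, source):
    """Toy GPS spatialisation: gain falls with distance, pan follows the
    source's angle relative to the walker's heading (illustration only)."""
    dx, dy = source[0] - listener[0], source[1] - listener[1]
    distance = math.hypot(dx, dy)
    gain = 1.0 / (1.0 + distance)               # closer sounds are louder
    bearing = math.degrees(math.atan2(dx, dy))  # 0 degrees = straight north
    relative = (bearing - heading_deg + 180) % 360 - 180
    position = math.sin(math.radians(relative))  # -1 = fully left, +1 = fully right
    return gain, position

# walker at the origin facing north; the statue is due east of them,
# so the sound should end up almost entirely in the right ear
g, p = spatialise((0, 0), 0, (10, 0))
```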

Another very basic sound design method is the editing itself: cutting the sound down to exactly the parts you want, at the moments you want. Reverb can be very useful here too, when you need to make an edit so sharp that you can hear a bit has been cut off. A reverb can give the sound in question a ‘tail’ again, so the cut ending feels less unnatural. And when layering sounds, another sound happening at the same time can sometimes mask an otherwise noticeable cut.
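Alongside the reverb-tail trick, a very common way to soften a cut is a short fade: ramping the volume down over the last few milliseconds so the waveform doesn’t stop abruptly and click. A minimal sketch:

```python
def fade_out(samples, fade_len=100):
    """Ramp the last `fade_len` samples down to zero so the cut doesn't click."""
    out = list(samples)
    n = min(fade_len, len(out))
    for i in range(n):
        out[len(out) - n + i] *= 1 - (i + 1) / n  # linear ramp down to 0
    return out

clip = fade_out([1.0] * 200)  # ends at exactly 0.0 instead of stopping abruptly
```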

Reversing a sound is another very basic technique, one which has been around since the days of audio tape. Many classic ‘whoosh’ sounds with a sudden ending (a cinematic staple) are, for example, cymbals or similar sounds with a large initial peak (the ‘transient’) and a long ring-out, played in reverse. Changing the playback speed, slowing a sound down or speeding it up compared to the original recording, is also a staple of the sound design toolbox. In the digital era the amount of slowing down or speeding up can be extreme, and many algorithms also offer the possibility to do this while keeping the original pitch. ‘Classic’ analogue, or analogue-inspired, changes of the playback speed will also affect the pitch, as in the well-known effect of a voice being slowed down and becoming much lower in the process. Pitch changes that don’t affect the playback speed are also possible these days, and are used in famous examples like the Auto-Tune set of audio tools.
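Both techniques are simple at the sample level: reversing just flips the order of the samples, and a naive speed change reads through them at a different rate (which, as described above, also shifts the pitch; keeping the pitch intact requires more elaborate algorithms). A sketch with made-up sample values:

```python
def reverse(samples):
    """Play a sound backwards: a long ring-out becomes a rising 'whoosh'."""
    return samples[::-1]

def change_speed(samples, factor):
    """Naive resampling: factor 2.0 plays twice as fast (and sounds an octave
    higher), 0.5 plays at half speed (and sounds an octave lower)."""
    out, pos = [], 0.0
    while pos < len(samples) - 1:
        i, frac = int(pos), pos - int(pos)
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)  # interpolate
        pos += factor
    return out

ring_out = [3.0, 2.0, 1.0, 0.0]
whoosh = reverse(ring_out)          # [0.0, 1.0, 2.0, 3.0]: swells instead of decays
fast = change_speed(ring_out, 2.0)  # half as many samples: plays twice as fast
```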

This is just a rudimentary glance at the multitude of sound design techniques and tools out there. But the basic parameters of a sound (dynamics, tone, timbre) are the ones you will find yourself tweaking over and over again, with whatever sophisticated or basic tools you have at hand.