Creating Immersive Atmospheres without Distracting from Primary Content
A fine balance is required when creating emotive audio that also provides a bed of sound which does not conflict with SFX or dialogue. This article outlines the process I follow when creating modulating, evolving atmospheres that do not distract from the primary content.
My process is essentially death by a thousand cuts; very small and subtle nudges throughout, rather than a broad stroke. Whilst everything may seem quite detailed, the process itself will become instinctive the more you work at it. I have attempted to articulate my thought process when creating emotive audio. Whilst it may be for just 15 seconds of audio, it could play an integral role in the impact of a major scene in a film or series.
For the purposes of this article, the scene is:
Close-up of astronaut’s face, about to witness cosmic event, 10-15 seconds.
Electronic score throughout, scene demands tension but not horror, cerebral but minimal, stick with ‘space age’ style music.
Choosing your instrument
Complex tools are not required as long as your process is thorough and focused. Throughout this article I will explain how this can be achieved with just three sawtooth waveforms, filters, LFOs and a basic signal path within the synth; in this case Vital.
Many paths can be taken to achieve the outcome; however, a minimal approach is favoured for the following reasons:
i) Time: Sticking with a single instrument for a cue can be a huge timesaver, as it circumvents complex routing in the DAW, window switching, external automation and interaction with other elements. When working on a large project, the minutes add up, and this can save you and your team hours.
ii) Limits breed creativity: When forced to use a single tool, your focus remains within that instrument and the temptation to hunt for other options disappears. You will also find that any final flourishes upon completion, with external effects or sounds, come a lot quicker when they are enhancing rather than replacing the audio.
Stage 1: Assessment & Initial Base Drone Movement
The emotion we are looking to translate to the viewer, and to focus on throughout, is building tension relative to the surroundings. In this instance, an evolving drone will build and retrigger upon reaching the peak of its movement. An early decision needs to be made as to whether to proceed with a harmonic or a more discordant layer.
Assessing the setting, space lends itself well to harmonious and ethereal qualities. Nothing prevents us from implementing harsh, discordant tones to build tension; however, they would be at odds with the vastness and harmony we associate with space. Whilst this could work if we were building a much darker scene and wanted to provide immediate discomfort to the viewer, this scene is one of mounting tension. Introducing tonal harmonic layers will provide familiarity to the viewer, whilst the evolving nature of the tone will provide the tension required.
In this case, this is achieved with a single sawtooth waveform and a Low-Frequency Oscillator (“LFO”) opening a ladder filter, slowly revealing further frequencies as the scene progresses. The mode of the LFO is set to Trigger, so it restarts each time the note is re-introduced. Were it synced, it would produce a much more flowing drone, morphing into a pad. This would ultimately detract from the tension we are building, and the drone could be interpreted as a melody by the viewer. A melodic line is not suited to the atmosphere we intend to build, which is why the LFOs in this patch will be set to Trigger throughout.
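The Stage 1 movement can be approximated outside the synth. Below is a minimal numpy sketch, not Vital itself: the sample rate, fundamental, note length and cutoff range are illustrative assumptions, and the one-pole low-pass is a crude stand-in for Vital's ladder filter. The ramp LFO restarts with each note, mimicking Trigger mode.

```python
import numpy as np

SR = 44100            # sample rate (assumed)
DUR = 4.0             # seconds per note before retrigger (assumed)
FREQ = 55.0           # drone fundamental, A1 (assumed)

t = np.arange(int(SR * DUR)) / SR

# Naive sawtooth: one ramp from -1 to just under 1 per cycle
saw = 2.0 * (t * FREQ % 1.0) - 1.0

# LFO in 'Trigger' mode: a single rising ramp per note,
# restarted on each retrigger rather than free-running or synced
lfo = t / DUR

# Map the LFO to cutoff: the filter opens exponentially from 100 Hz
# to 4 kHz as the scene builds (range is an assumption)
cutoff = 100.0 * (4000.0 / 100.0) ** lfo
coeff = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)

# Time-varying one-pole low-pass
out = np.zeros_like(saw)
y = 0.0
for i in range(len(saw)):
    y += coeff[i] * (saw[i] - y)
    out[i] = y
```

As the cutoff sweeps up, progressively higher sawtooth harmonics are revealed, which is the "slowly revealing further frequencies" effect described above.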
Stage 2: Addition of second layer
At this stage, gentle complexity is needed to give the tone more emotional impact and tension. An additional layer is added: a fifth, one octave above the primary drone, routed via a second filter. To add to the tension, the LFO opening this filter is delayed, allowing the drone to evolve slowly and introduce the unsettling fifth.
The harmony evolving between the original drone and the new layer creates a level of unsettled intrigue, without introducing new elements which would distract from the scene and tension.
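Interval-wise, a fifth one octave above the fundamental works out to a 3:1 frequency ratio (2 for the octave × 3/2 for the fifth). A rough numpy sketch of the delayed introduction, with all frequencies, levels and timings as assumptions rather than the actual patch values:

```python
import numpy as np

SR = 44100
DUR = 4.0
F0 = 55.0            # primary drone fundamental (assumed)
F2 = F0 * 3.0        # a fifth one octave up: 2 (octave) x 3/2 (fifth) = 3x

t = np.arange(int(SR * DUR)) / SR
layer1 = 2.0 * (t * F0 % 1.0) - 1.0
layer2 = 2.0 * (t * F2 % 1.0) - 1.0

# Delayed LFO: hold at zero for the first half of the note, then ramp,
# so the fifth only fades in as the scene develops
DELAY = DUR / 2.0
lfo2 = np.clip((t - DELAY) / (DUR - DELAY), 0.0, 1.0)

# Second layer sits slightly under the first (mix level is an assumption)
mix = layer1 + 0.5 * lfo2 * layer2
```

The delayed ramp is what creates the evolving harmony: the listener settles into the fundamental before the fifth is allowed to emerge.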
Stage 3: Addition of third layer and Unison
Turning towards the sound itself, more width is required to hint at the vastness of the scene itself. We could achieve a more claustrophobic effect by keeping the spatial nature of the audio narrow, or even monophonic, which would be much more impactful to the viewer and provide a sense of being inside the astronaut’s helmet, or mind!
In this case, however, that would be a distraction from the building of tension; a wider spatial relationship is key to keeping the audience's attention on the scene rather than on the impact a monophonic sound would provide.
Unison is introduced on the second oscillator, alongside a third sawtooth oscillator transposed two octaves above the original drone. This is introduced via a delayed low-pass filter triggered by an LFO. Compression is added to the signal path, which by its nature provides another subtle nudge towards the tension we have been building.
The compressor is set up so all elements of the drone, no matter how quiet, are audible in one shape or form. This provides the viewer a feeling of relentlessness, without complicating the audio and distracting from the scene.
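For readers less familiar with unison, the width comes from stacking slightly detuned copies of the oscillator and spreading them across the stereo field. A simplified numpy sketch of the idea follows; the voice count, detune spread and linear panning law are all assumptions, not Vital's actual unison implementation.

```python
import numpy as np

SR = 44100
DUR = 2.0
F0 = 110.0           # second-oscillator pitch (assumed)
VOICES = 5           # unison voice count (assumed)
DETUNE_CENTS = 12.0  # total detune spread in cents (assumed)

t = np.arange(int(SR * DUR)) / SR

# Spread the voices evenly across +/- half the detune range,
# and pan them from hard left to hard right
spread = np.linspace(-0.5, 0.5, VOICES) * DETUNE_CENTS
left = np.zeros_like(t)
right = np.zeros_like(t)
for i, cents in enumerate(spread):
    f = F0 * 2.0 ** (cents / 1200.0)   # convert cents to a frequency ratio
    voice = 2.0 * (t * f % 1.0) - 1.0
    pan = i / (VOICES - 1)             # 0 = hard left, 1 = hard right
    left += (1.0 - pan) * voice
    right += pan * voice

stereo = np.stack([left, right]) / VOICES
```

Because each channel receives a different blend of detuned voices, the left and right signals decorrelate over time, which is perceived as width rather than as a new, attention-grabbing element.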
Stage 4: Final Touches
The drone itself has been completed, however, it is the attention to detail which will bring it to life and ensure the emotional impact we wish to translate to the viewer.
To enhance the spatial effect of the unison introduced in the previous stage, a chorus effect is added to the signal path. As mentioned previously, we are seeking to avoid cluttering the soundscape with too many elements, and as a chorus can add an element of washiness, it also needs to be incorporated carefully. In this case, since the elements are at their quietest at the start of the drone, the chorus starts with its wet/dry mix at 0, and an LFO is assigned to introduce the effect slowly as the filters open. This not only gives the layers width but also glues them together as they build, adding subtle intensity to the tension.
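The ramped wet/dry idea can be sketched with a toy single-tap chorus in numpy. Everything here is an assumption for illustration (delay range, modulation rate, maximum wet level); a real chorus would also interpolate between samples rather than rounding the delay.

```python
import numpy as np

SR = 44100
DUR = 4.0
t = np.arange(int(SR * DUR)) / SR
dry = 2.0 * (t * 110.0 % 1.0) - 1.0     # stand-in drone signal

# Toy chorus: one delayed tap whose delay wobbles between 5 and 11 ms
rate = 0.8                               # chorus LFO rate in Hz (assumed)
delay = 0.008 + 0.003 * np.sin(2.0 * np.pi * rate * t)
idx = np.arange(len(t)) - (delay * SR).astype(int)
idx = np.clip(idx, 0, None)              # no signal before t = 0
wet_sig = dry[idx]

# Wet/dry starts at 0 and is eased in over the note as the filters open
wet = np.clip(t / DUR, 0.0, 1.0)
out = (1.0 - 0.5 * wet) * dry + 0.5 * wet * wet_sig
```

Because the wet level is zero at the note's start, the chorus only becomes audible as the drone builds, matching the "introduce the effect as the filters open" behaviour described above.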
A reverb has been added before the compressor. The decision to place the reverb here will always depend on the nature of both the project and the effects processors being used. This specific reverb and compressor work well together, gluing the high frequencies and providing a sheen and airiness to the sound. For this drone, the effect on the tail of the ADSR envelope's release allows a more shimmering high end to be carried over upon the retrigger. Whilst this is not a technique often used in general mixing, sound design does allow deviation from the norms, to prioritize emotional impact and attention retention.
Whilst the tension has been built, there was more immediacy I wanted to add to the drone. As it stands, the drone is suitable for a more serene or emotional type of revelation; however, there is a physical event about to occur, and I wanted to introduce a subtle rhythmic element. This subtle break from the droning tones indicates an upcoming change which, paired with the tension we have built, informs the audience not only of the astronaut's emotion but of a change that is due to occur.
Rather than introduce a new element, a morphing algorithm on the first oscillator is controlled via LFO, pinching the waveform. The level of the low-order harmonics of the second oscillator is modulated via the same LFO, providing a rhythmic sensation. A macro is then created, which I called ‘tremolo’, and is itself modulated to provide waves of pulsing to the drone.
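The nested modulation (a fast pulse whose depth is swept by a slower LFO) can be sketched in numpy. The "pinch" below is a crude cube-law waveshaper, not Vital's actual morph algorithm, and both LFO rates are assumptions; the point is the structure: a fast LFO morphs the waveform, and a slower ‘tremolo’ macro modulates how deep that morph goes.

```python
import numpy as np

SR = 44100
DUR = 4.0
t = np.arange(int(SR * DUR)) / SR

saw = 2.0 * (t * 55.0 % 1.0) - 1.0

# Crude 'pinch': a sign-preserving cube pulls energy toward the peaks
pinched = np.sign(saw) * np.abs(saw) ** 3

# Fast LFO morphs between the two shapes, giving a rhythmic pulse
pulse_lfo = 0.5 + 0.5 * np.sin(2.0 * np.pi * 2.0 * t)    # 2 Hz (assumed)

# 'tremolo' macro: a slower LFO modulating the depth of the pulse,
# so the pulsing itself arrives and recedes in waves
macro = 0.5 + 0.5 * np.sin(2.0 * np.pi * 0.25 * t)       # 0.25 Hz (assumed)
morph = macro * pulse_lfo

out = (1.0 - morph) * saw + morph * pinched
```

The key design choice is modulating a modulator: the pulse never settles into a steady rhythm the ear can tune out, which keeps the sense of an impending change without adding a new element to the mix.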