The mandatory history bit: How did you end up a mixer?
As with a lot of people in this field, I was into music first. I decided to do something a bit more on the technical side of sound at university so I studied acoustics and audio systems at The University of Salford's Applied Acoustics department and part of that was having a year in the industry. This placement came up at Dolby so I went off and did that for a year.
I got into lots of dubbing studios as part of that placement and after finishing university, went back to Dolby for a year. Every time I went into a studio I saw people putting all this skill and effort into editing and mixing a film's soundtrack and I just thought, "I'd much rather be doing that". So I decided to quit my Dolby job and go for it.
My partner was born in New Zealand so after a bit of travelling we ended up there and I heard about Peter Jackson setting up Park Road Post. My timing was pretty good, arriving halfway through the Lord of the Rings trilogy, so I wangled a job there and dived in.
I started out as a mixing assistant on the stage for Return of the King and King Kong, those sorts of things, and pretty quickly started doing some short films, features, documentaries and so on, right up to where I am today, back in London mixing on some really great films.
So what do you enjoy mixing the most: DX, MX or FX?
I mostly mix FX, but really enjoy hopping over to the other side of the desk. I think it's good to also experience mixing DX and MX. Being responsible for each element gives you a slightly different angle on how they integrate with the rest of the mix. Although lots of the process is similar there is definitely variation in skill set and creative challenges. I think the experience of mixing DX and MX has definitely informed what I do as an FX mixer.
Name one film mix which really spins your wheels
I really love it when mixes go for a definite style or feel. I think that's why Gravity won so many people over... it was a bold approach. I think the mix on "No Country for Old Men" supported the environment and story brilliantly, and I'm a big fan of economical use of music. I also really appreciate hearing how different sound teams find space and dynamics even when the visuals aren't driving them. More than a few soundtracks have left me fatigued as a listener.
You have a few ATMOS mixes under your belt now, so what's your take on mixing in the new formats?
I think the first Hobbit film was one of the first projects to get underway in ATMOS, maybe only the second full feature in the format. It was such a new format, such a new concept, that equipment developers hadn't really caught up, so the consoles weren't geared towards actually mixing ATMOS.
A lot of effort was put into finding workarounds to get the job done without causing too much upheaval to the creative process. Particularly on a project like that, you need to try and keep the workflow that you're familiar with because the timescales are so short and the deadlines come up so quickly. It sort of needed to feel familiar for the mixers, supervisors and the director as well.
You could just take the 7.1 mix and feed some of the surrounds up to the top, but that would be a tiny gesture towards meeting ATMOS. So in many cases we actually took it back to raw tracks, where we were able to grab individual elements, pan them through the height channels and move them as objects.
The fact that you've got not just the height channels, but the full frequency surrounds means that you can just fold elements of the music out a little bit from the screen and it doesn't lose anything playing further back in the room.
It's not always appropriate but I've worked on some films where you do have something directly above the listener and traditionally, in 5.1 for example, we'd likely be playing that just in the side surrounds. That would give us a very wide image and suddenly you have footsteps all the way to the left and all the way to the right depending on where you're sitting.
Do you think these new formats are prone to gimmickry like early 3D films or are the directors already conscious of this?
I think it's a mixture actually, because some directors are very keen to try and make it a different experience from the 7.1 mix. With the first ATMOS mix I did, the director wanted to use not just the full range surrounds, he really did want to move stuff around. That breaks the original 7.1 concept a little, so it did create a different feel, and it did lean a little towards the idea of an immersive experience.
And yes, there can be that kind of tendency for over-cooking and over-using new technology, but I think it just comes down to the sensibilities of the mixers and film-makers as to what adds to the experience and story, and what pulls people off the screen, what turns people's heads, what jars with the listener. We want sound to enhance the storytelling rather than distract!
Object pan or bed pan? Have you worked out any rules-of-thumb yet?
If you're panning elements as objects, they can become very discrete: rather than using an array of six or seven speakers, you're often only passing the sound through one or two speakers at a time. So there is that sense that things become a bit more separate and a bit more defined in the mix. That's great because it means people tend to pick out object-panned sounds rather than ones playing across arrays.
Certain sounds work better than others as objects. I think if you've got a huge alien mother ship landing and you're trying to give it scale and size, you might be better off panning it in the 9.1 bed, unless you have a lot of individual elements to pan as objects and give it that sense of size.
For the first few ATMOS mixes we had no way of controlling the size of an object, so it would always play out of just one or two speakers rather than diverging across a larger number. It was only later that they introduced this idea of divergence, and it's a hugely important addition.
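To make the size/divergence idea concrete, here is a toy panning sketch. This is not Dolby's actual renderer algorithm, just a hypothetical illustration of how a divergence parameter might spread an object's energy from one discrete speaker out across a wider part of the array while keeping constant power:

```python
import math

def object_gains(position, speakers, spread):
    """Toy gain calculation for a panned object (illustrative only).

    position: object's x position across the room, in [0, 1]
    speakers: list of speaker x positions, in [0, 1]
    spread:   0.0 = fully discrete (nearest speaker only),
              1.0 = heavily diverged (energy shared across the array)
    """
    # Weight each speaker by proximity to the object; a larger
    # spread widens the window so more speakers contribute.
    width = 0.05 + spread  # small floor keeps the maths finite
    weights = [math.exp(-((s - position) / width) ** 2) for s in speakers]
    # Normalise for constant power: the squared gains sum to 1,
    # so perceived loudness stays steady as the object diverges.
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]
```

With spread at 0 virtually all the energy lands in the nearest speaker, which is the "very discrete" behaviour described above; pushing spread towards 1 shares the energy across neighbouring speakers, closer to how a sound plays across an array.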
It's been interesting playing with reverbs and delays using objects and arrays. I think of early reflections and delays as naturally more localized and discrete than reverb tails so maybe more suited to objects. Obviously you have to have time to try these things out, but it's nice to be able to look at things with that level of detail.
Are you a big plug-in user?
I don't use a huge array of plug-ins, only ones that I think really add value to the mix process. On this project we've pre-mixed in-the-box, so the sound editors have track laid all the source elements coming out as "food groups". We have them broken down into a number of 7.1 FX buses and 7.1 Ambience buses. There are probably twenty-six 7.1 inputs into the DFC from the Pro Tools pre-mixed source tracks, and around seventy more to accommodate material split off for object panning on the DFC.
I like the standard Pro Tools ChannelStrip because I've lived for years on the System 5, so I'm kind of used to that. We use Phoenix Verb, because it's a nice, natural sounding verb and seems to be a little more stable than using Altiverb in these big sessions, and then, of course, we've had Spanner on all the source tracks and that's been in place while these guys have been track laying.
They've done a lot of panning on Spanner, then in the pre-mix I just have an iPad to grab pans and address stuff that might need a tweak. These guys are great at judging where stuff should sit, but occasionally I just need to have a little play. In the past I've also used Spanner a lot on the multi-channel outputs from Pro Tools, just to manage pans or adjust slightly the overall image of the 7.1 stems. When it hits the desk and we're final mixing I occasionally adjust a few Spanner pans, but I tend to move away a little bit from the in-the-box thing in the final mix.
And you're cool with the sound editorial crew doing some of this work before you receive it?
It's more and more common for sound designers and sound editors to do that pan pass and put quite a lot of automation onto the tracks before they actually get to me in the pre-mix. More and more we're able to put multiple elements up at once, really hear them together and work out how it plays in a big room, in context with music and dialogue. I can still take any sound, adjust levels, adjust pans, control dynamics, add reverb, change any parameter, but I have more information about what is around that sound to contextualise and evaluate how it's going to play in the final mix.
Sure, there are probably a lot of mixers who don't feel comfortable with the amount of automation coming to them at pre-mix time and feel there are decisions being made that they should have been party to, but if there has been a sound editor on a project for six months before I start mixing, then I want them to have at least tried to make those decisions with the director, and we can always re-evaluate them during the mix if need be. The way I think of it is, a mix on this scale can be a huge undertaking, so if there's a certain point we can start from which lets us think less about wrangling all this material, it frees us up to think a little more creatively during the process, which can't be a bad thing.
IMDB Gilbert Lake: http://www.imdb.com/name/nm1944754/