Right then, in the past two issues of Making Tracks in Guitar Interactive magazine we’ve looked at what we really mean when we talk about phase and polarity, and how they might matter when we want to record a single source to one, two or more tracks. With a little care at the recording stage it’s fairly simple to get a good solid multi-track recording, and if it does go a little wrong it’s usually possible to fix it up by editing the tracks afterwards. What is rather less obvious is how to avoid, and fix, the issues that appear when our source bleeds into other microphones.
Sound travels at around 1 foot per millisecond (1/1000 of a second) – sorry metric users, this one just works better in old-world units (it’s roughly 3 milliseconds per metre). That means a sound will arrive at a distant microphone later than at a closer one, and this is what causes our phase issues. Past a certain point, we start to hear the sound through the distant microphone as a separate source of its own – as an echo. We’re talking here about phase, but those late signals can also cause issues with timing: a snare drum picked up by a distant room mic will be recorded later than by the same snare’s close mic, and if the distance is large enough the two signals can flam (where you hear two close, but not quite in-time, hits). Timing issues are generally really obvious and can take some editing skill and judgement to fix – that’s a story for another day. Thank goodness we’re not trying to do this with a razor blade on 2” tape!
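To put some numbers on that, here’s a quick Python sketch of the arithmetic (the speed of sound is taken as 343 m/s at room temperature, and the function names are just for illustration):

```python
# Rough figures only: speed of sound is ~343 m/s in air at room temperature.
SPEED_OF_SOUND_M_S = 343.0

def mic_delay_ms(distance_m):
    """Time (in milliseconds) for sound to reach a mic distance_m metres away."""
    return distance_m / SPEED_OF_SOUND_M_S * 1000.0

def delay_samples(distance_m, sample_rate=48000):
    """The same delay expressed in samples at a given sample rate."""
    return round(distance_m / SPEED_OF_SOUND_M_S * sample_rate)

# A room mic 5 m from the snare hears it about 14.6 ms late --
# comfortably into "flam" territory against the close mic.
print(round(mic_delay_ms(5.0), 1))   # 14.6 (ms)
print(delay_samples(5.0))            # 700 (samples at 48 kHz)
```

That 700-sample figure is exactly the sort of offset you’d slide a room-mic track by when lining things up in a DAW later.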
The sound will also change with distance; it will tend to lose top-end and level and become less focussed as it gets reflected from things in the room and mixes in the air with other sounds. As sounds become less similar, the phase relationships between them become less important, and if one sound is much quieter than the other, its effect will again be reduced.
Sound intensity drops (roughly) with the square of the distance – the inverse-square law. So if we have a microphone that’s hearing your cab from 2cm away, and mine from 4m away, my cab is 200 times as far from your mic as yours is, so (all other things being equal) the sound intensity at the mic position from your cab is 40,000 times greater than from mine – that’s about 46dB, which is a far less impressive-sounding number for the same massive difference. Factor in polar patterns that attempt to reject sound from the sides and back of the mic, and that my cab is probably pointing away from your mic anyway, and we really don’t have to worry about close mics too much unless your cab is really, really, really quiet and mine is really stupidly loud.
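If you want to check that sum (or run your own distances), here’s the inverse-square calculation as a short Python sketch – function names are mine, not anything standard:

```python
import math

def intensity_ratio(near_m, far_m):
    """Inverse-square law: relative intensity of a source at near_m
    versus an identical source at far_m, measured at the same mic."""
    return (far_m / near_m) ** 2

def ratio_to_db(ratio):
    """Express an intensity (power) ratio in decibels: 10 * log10(ratio)."""
    return 10.0 * math.log10(ratio)

# Your cab 2 cm from the mic versus mine 4 m away:
r = intensity_ratio(0.02, 4.0)
print(round(r))                # 40000
print(round(ratio_to_db(r)))  # 46 (dB)
```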
The final situation we’ll look at is where we have multiple close mics, perhaps on a drum kit, or multiple mics on different speakers in a guitar cab. In this case, the mics are hearing different sources at similar levels and close distances. This is where careful mic placement can really help; it’s not too bad with two mics on a cab, but getting phase-coherent recordings from a set of mics wedged into the gaps around a drum kit can be a bit like wizardry. It often works to choose a “master” source – I’ll often use the snare – and try to phase-align as many of the other mics to it as I can, especially the overheads and distant mics. Tom tracks can often be gated or muted between hits, but very often we will end up with snare crack, bass drum boom, hi-hat chikkkks and cymbal splash all over the place. Ultimately, we need to place the microphones where they sound good on their intended source and then do what we can to reduce phase issues; we still have the options of a basic polarity reverse, and of sliding tracks or using phase-alignment tools in editing/mixing – and if it’s REALLY bad we can just sample-replace the whole kit!
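Sliding a track into alignment is usually done by ear or with a dedicated plug-in, but under the hood those tools are doing something like a cross-correlation: trying every offset and keeping the one where the two signals match best. Here’s a minimal, brute-force Python sketch of that idea on a toy “snare hit” (the signals and function are purely illustrative, not any plug-in’s actual algorithm):

```python
def best_lag(reference, other, max_lag=64):
    """Find the lag (in samples) that best aligns `other` with `reference`
    by brute-force cross-correlation. A positive lag means `other` is late."""
    best, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(reference[i] * other[i + lag]
                    for i in range(len(reference))
                    if 0 <= i + lag < len(other))
        if score > best_score:
            best, best_score = lag, score
    return best

# Toy "snare hit": a short burst on the close mic, and the same shape
# arriving 12 samples later (and quieter) on a room mic.
close_mic = [0.0] * 100
close_mic[20:24] = [1.0, 0.8, -0.5, 0.2]
room_mic = [0.0] * 100
room_mic[32:36] = [0.5, 0.4, -0.25, 0.1]

print(best_lag(close_mic, room_mic))  # 12 -> slide the room mic 12 samples earlier
```

Real tools work on much longer windows and fractional-sample delays, but the principle is the same: find the offset, then nudge the track.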
Bear in mind that none of this stuff matters in isolation; it’s ONLY when we mix tracks containing different versions of the same sound that we will hear these types of phase issues, and we have, in reality, been hearing them all our lives. We will never get live-recorded music to be perfectly in phase unless we use just one mic in an anechoic chamber, and we should never make the mistake of believing that we need to. Room microphones are always going to end up out of phase (and time) with something, and we don’t worry about it because we can bring them into a mix at a fairly low level and let the general time and phase smear add a sense of space – it’s just not a problem. In reality, set the mics to capture their primary sound as you like it, do what you can to reduce bleed to a manageable level, and fix any remaining major timing and phase problems in software afterwards. Make capturing a great performance the priority, and remember that live sound engineers have to deal with all of this stuff in real time, with a live audience that’s close enough to throw things at them.