Hi everyone!

I apologise for being away for so long! As I am an awesome human, I have decided to post up a few of the Q&A emails I have received over the last couple of months. Hopefully these questions are ones that you have had, or are having, while locked up in quarantine. If you have any questions, please drop me a message so I can help with any recording/technology issues you’re having!

Catch you on the flipside!
“Which omni mics are best for small ensemble recordings?” – Carl, age 33.
My personal preference would be for the Microtech Gefell M221s. The Josephson C617SET uses the same capsule, of course, and its electronics are fractionally quieter, but the Acoustic Pressure Equalising spheres supplied with the Gefell mics give them a significant edge in versatility, to my mind.
The DPA 4060 microphones are astonishingly good for their size and price but are inherently slightly compromised on the self-noise front, again, and have a tendency towards brightness that I don’t think you would appreciate. DPA’s d:dicate range, reviewed in last month’s issue, now includes the MMC2006 omni capsule, which essentially contains a back-to-back pair of 4060s internally (with a self-noise advantage). This ‘twin-diaphragm’ technology is presented as a lower-cost alternative to the classic MMC4006 capsule, but the MMC2006 is not compatible with the company’s range of APE spheres.
As for other alternatives, I remain a big fan of Sennheiser’s MKH20s, which I think still sound slightly better than the newer MKH8020. I like the ability to switch them from nearfield to diffuse-field equalisation, to suit different applications, and I relish their amazingly low harmonic distortion, ruler-flat frequency response, and very low self-noise.
“Does pan placement change if I place my speakers further apart?” – Micheal, age 19.
The maximum image width is obviously determined by the physical separation of the speakers, so switching to the 55-inch set moves the outer edges further out, as you’ve noticed. The whole stereo image has been stretched from the centre outwards in both directions. Imagine an elastic band, with the centre pinned in the middle of your sound stage and the outer edges fixed to the monitors. If you mark the positions of different sound sources on the band and then move the monitors outwards, the elastic band stretches and so too does the spacing between your marked sound sources. So, if the saxophone is panned 30 percent left in the image, then that’s where it will always be. When you switch to the wider speakers, ‘30 percent left’ is actually going to be physically further left than it was with the closer speakers.
Don’t get your percentages confused with absolute measurements! When you set your speakers further apart, the placement of a panned source will inevitably change in terms of degrees and physical distance, but not as a proportion of the distance from the centre to the edge of the stereo panorama.
I’ll assume your listening position is at the apex of an equilateral triangle, with the other two points being your 40-inch-spaced speakers. Rough trigonometry suggests that with the closer speakers the sax will appear roughly 10 degrees left of centre. Switch to the second set without moving your listening position and this perceived angle increases to about 14 degrees. But it is still panned 30 percent left within this wider overall image!
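If you want to sanity-check those numbers, here is a minimal Python sketch of that rough trigonometry. It assumes the simplest possible model: the phantom image sits on the straight line between the speakers, at the panned fraction of the way from centre towards the left speaker, and the listening distance stays fixed at the apex of the original 40-inch equilateral triangle.

```python
import math

def perceived_angle(speaker_spacing, pan_fraction, listening_distance):
    """Angle (in degrees) of a phantom image panned a given fraction of the
    way from centre towards one speaker, modelled simply as a point on the
    straight line between the two speakers."""
    offset = pan_fraction * (speaker_spacing / 2)  # lateral offset from centre
    return math.degrees(math.atan2(offset, listening_distance))

# Listener at the apex of an equilateral triangle with the 40-inch pair, so the
# distance to the speaker baseline is 40 * sqrt(3) / 2 (about 34.6 inches).
distance = 40 * math.sqrt(3) / 2

print(perceived_angle(40, 0.3, distance))  # ~9.8 degrees with the 40-inch spacing
print(perceived_angle(55, 0.3, distance))  # ~13.4 degrees with the 55-inch spacing
```

Run it and you get roughly 9.8 and 13.4 degrees, which is where the ‘roughly 10’ and ‘about 14’ figures above come from.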
“Can I use an effects pedal for vocals?” – Sam, age 21.
There will be people who tell you to track clean and add this sort of effect to a vocal only while mixing, but it can be both fun and inspirational to try mangling things while tracking, can’t it? Still, they have a point: the sort of fuzzy distortion a Big Muff Pi can be responsible for is not something you can undo. For that reason, it makes sense to track a clean part alongside your distorted one, and there are a few ways of doing this. If you have a small mixer (or even a large one!) you could simply mult the clean mic signal out to another track and process that. You could try patching the pedal in as an insert effect on that channel, and this will work to some extent, but there’s likely to be both a level and an impedance mismatch, which means the pedal probably won’t operate quite as it would on the instrument signal for which it’s intended. Whether it’s working, though, is a subjective matter: use your ears, and if you like what you hear then great! If not, then you need some way of overcoming those problems.
Using the Big Muff as a send effect might improve things, as you can change the level going into the pedal using the mic channel’s aux send control. If that doesn’t work, what you’re looking for is a DI/re-amp box. The re-amped signal goes to the Big Muff’s input, and the pedal’s output comes back via the DI to a line input on your mixer or recording device. If you have no mixer, you could do pretty much the same thing, but beware of latency. You’ll need an interface with zero-latency monitoring, so that the input signal can be routed straight to an output without passing through the A-D/D-A converters. The incoming mic signal is routed both to your DAW and to a physical output. That physical output goes into the Big Muff (the same level/impedance considerations apply) and the pedal’s output comes back, either via a DI box to a mic input, or straight into an instrument input if your interface has one. Alternatively, you could just use the processed part in your monitor mix and ‘re-amp’ the clean signal through your pedal later, when you might have a little more control over the tone.
“Why would I want to bounce out mixes for referencing?” – Mike, age 25.
This is an interesting question that I’ve been asked on a number of occasions, but I’m not sure I’ve ever written down my answer to it before! I realise that it’s perfectly possible to compare a mix in progress with commercial releases using something like Magic AB, Melda MCompare or MeterPlugs Perception, or indeed just using a multi-channel switcher plug-in within Reaper, which is my own normal method. However, I do still prefer to bounce out my mix as a WAV file for referencing purposes most of the time, for several reasons, although not, funnily enough, for the reason you suggested!
On a practical level, I like the flexibility the DAW offers in terms of editing out and looping the most relevant pieces of each reference track, and the way it lets me easily adjust the time offset between my mix file and each reference track, something that I’ve not found as straightforward in the referencing plug-ins. I also often experiment during referencing to see what impact loudness processing might have on my mix, but mastering-style processors can cause CPU or latency-compensation problems when applied to an already heavily loaded mix project, and I can do without glitches or crashes while mixing. Besides, anything that encourages people to apply mastering processing to their mix project is a bit hazardous in my view, because I’ve seen a lot of people come unstuck that way, effectively trying to use mastering as a quick fix for complex mix problems.
The other psychological advantage of the ‘separate reference project’ approach for me is that it makes me more confident of when the mix is finished. At each referencing iteration, I’ll build up a properly cross-checked list of tweaks I want to do, and then check the effectiveness of those tweaks at the next iteration. Once everything’s crossed off the list, I can feel pretty confident of signing off the mix. If you reference in a less structured ‘hunt and peck’ kind of way, I find it’s a lot trickier to know when you’re actually done.
The last thing to say is that while referencing I prefer to step back mentally from the technical details of a mix and listen more like a typical punter, which is far easier to do when I’m listening to a bounce-out. Because I can’t change anything, my whole mindset changes. Thanks to pure paranoia, I actually do most of my bounce-outs in real time, and I’m constantly amazed at how often I’ll spot some glaring oversight even during the bounce-down itself that I hadn’t noticed in the last five hours of mixing, simply because of the change in mental perspective that occurs once I think “now I’m bouncing down the mix”. I’m also more likely to take the bounce-down out to the car, the office PC, an iPod or wherever.
Sure, you could work around all of these issues when using a referencing plug-in on the mix project, but you’ll need a whole lot more self-discipline than I have, frankly! And besides, I think the little breaks you’re forced to have while bouncing things out and switching projects are good for perspective in their own right, but that might be the Luddite in me speaking.