When working with the endless options of vocal effects in today’s average digital audio workstation (DAW), it can be very tempting to go overboard. It’s like having a huge, free buffet in front of you — of course you’re going to want some of everything. But that doesn’t mean you need to put chocolate on pizza or eat four plates until you get sick.
Many artists get away with piling effects onto their vocals. Look at Radiohead, for example. Their seminal album, Kid A, opens with the song “Everything In Its Right Place,” in which singer Thom Yorke’s voice is reversed, looped, pitched up and down, and drenched in a variety of distorting effects.
However, beneath all of those vocal FX hides Yorke’s clear, human, emotive singing voice. So, when experimenting with effects like Radiohead does, be on the lookout for these five signs that your vocals have too much processing.
1. The vocal mix is muddy
One of the first signs to look for when using heavy effects on your vocals is how those effects balance in the overall mix of your track. If you’re listening back on speakers and things sound muddy, or if frequencies are clashing too much, check your effects rack. Turn off effects like reverb, delay, and distortion one by one and test the mix.
If it sounds cleaner without certain effects, consider taking them off your vocals, toning them down, or figuring out why they’re muddying the mix. For example, if it’s the reverb, try EQing your reverb so that only the frequencies you want are being heard. If it’s distortion, try softening the tone of the distortion so it’s less abrasive.
2. You can’t understand the lyrics
Since we already referenced Radiohead, let’s use them as an example again. Thom Yorke is a beloved singer, but fans know it’s not always easy to understand what he’s saying, even when his voice is bare without effects. So, before we make our next point, we acknowledge that not being able to understand lyrics can be because of singing style — not necessarily too much vocal processing.
On the other hand, let’s assume you have pretty good diction and that you’re a clear singer. If other people who don’t know the lyrics have to ask what you’re saying, that isn’t a very good sign. As much as effects like delay might make your falsetto sound ethereal and pretty, they can also make what you’re actually saying hard to decipher.
3. The emotion is lacking
One of the reasons people flock to singers like Adele or Christina Aguilera (whether you’re a fan or not, let’s examine them for “academic” purposes) is the emotion in their voices. You can hear what they’re going through in their vibrato, their vocal fry, their screams, their soft moments. If your voice is buried under layers of effects, those subtleties will get lost, leaving listeners feeling like they can’t connect with what you’re singing about.
Think about it — have you ever heard a song in a language you don’t speak, but you could feel what that vocalist was singing? It happens all the time, which is why we call music the universal language. Before you go crazy with the vocal effects, make sure the emotion you want to capture is still present.
4. The vocals blend too much with other instruments
Bands like Sigur Rós have several songs where singer Jónsi more or less uses his voice as another instrument. He’s not singing any lyrics; rather, he’s singing wordless falsettos, usually under an ocean of effects. If that’s your goal, then carry on!
However, for most artists — from pop to funk to indie rock — you want your vocals to be front and center. When you get too liberal with digital processing, your vocals can become so blurred in the mix that it’s not always clear to the listener whether they’re hearing a singer or just some incidental vocal sample that isn’t meant to be heard prominently. If you can’t clearly distinguish between a shoegazey guitar and a reverb-blurred falsetto, you might want to dial back the vocal processing.
5. They don’t feel human
Many popular artists use vocoders, Auto-Tune, and other robotic effects on their vocals. But even if they sound a little bit like a cyborg, they still feel human. Of course, the feeling we get from music is subjective, but when Kanye West screams at the end of “Blood on the Leaves,” you can tell it’s a human screaming — and really putting everything into it — even if it sort of sounds like Auto-Tuned gargling.
So, unless you’re purposefully trying to make yourself sound and feel like a robot, be sure that even with various vocal effects, there’s enough room for your human voice to come through — whether that’s the strain in your voice, your dialect, or your accent. Anyone can have a computer speak the words (as Radiohead popularized on their song “Fitter Happier”), but only you (and your own voice) can sound like you. Don’t let the temptation to become a full-on robot erase your own unique feeling and energy.
Learn about more modern mixing techniques (like EQ, compression, levels, pan setting, digital signal processing, FX sends, and so much more) from some of today’s leading sound engineers, and get your new record sounding crisp! Preview Soundfly’s newest and most in-depth mentorship-assisted online mixing courses, Faders Up I & II: Modern Mix Techniques and Advanced Mix Techniques, for free today!
Sign up straight from Flypaper, and get $100 off with code: FLYPAPERSENTME.
Sam Friedman is a Brooklyn-based electronic producer and singer-songwriter, creating under the moniker Nerve Leak. Praised by major publications such as The Fader and Bullett Magazine, his unique blend of experimental and pop music has earned him hundreds of thousands of streams across the web. An interdisciplinary creative, he also works as a journalist for music publications such as Sonicbids, ReverbNation, Samplified, and Unrecorded.