Post Here When You Have a Mixing Revelation

For when you do something and have the thought "why haven't I been doing this all along, clearly I need to do this on every track now!" A repository of questionable, spurious, and possibly ingenious mix tricks.

Jordan s/t (Jordan), Thursday, 23 October 2025 14:51 (two months ago)

My most recent thing was bussing all the guitars and most/all of the drums to parallel distortion tracks. I had done this before on occasion by bouncing something out and reimporting it to a new track, but now I've started doing it with sends (which I think is effectively identical, just faster?).

The main benefit is that I can go nuts with the saturation and distortion without having to worry about losing all the transients on the main tracks, and then very subtly blend it in to taste. Even when it seems barely audible, the harmonics seem to make everything so much clearer and richer in a mix.

My only nagging worry about this is the gain staging, because aren't I adding a lot of volume with all these doubled tracks? Then I have to make sure I'm not upsetting the balance of the mix, and that I'm not falling into the "it's louder, therefore it sounds better" trap.
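
For anyone picturing the routing, it's basically just this (a toy numpy sketch; the tanh drive and the -12 dB send level are placeholder numbers, not a recipe). And on the volume worry, the arithmetic is milder than it feels: a fully correlated copy tucked in at -12 dB only raises the peak by roughly 2 dB.

```python
import numpy as np

def parallel_saturate(dry, send_db=-12.0, drive=8.0):
    """Toy parallel-distortion send: crush a copy, blend it under the dry.
    tanh/drive stand in for whatever saturator you actually use."""
    wet = np.tanh(drive * dry) / np.tanh(drive)   # heavily saturated copy
    send = 10 ** (send_db / 20)                   # -12 dB -> ~0.25 linear
    return dry + send * wet                       # worst case ~+1.9 dB louder
```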

Jordan s/t (Jordan), Thursday, 23 October 2025 18:20 (two months ago)

thought this was a cocktail thread

The Luda of Suburbia (Alfred, Lord Sotosyn), Thursday, 23 October 2025 19:01 (two months ago)

Thought this was about meeting people at church

Now read it backwards. (dog latin), Friday, 24 October 2025 04:18 (two months ago)

But seriously, when I used to make more music, I became a bit obsessed with using parallel sends for pretty much every sound, especially EQ. I think Reason, as a DAW, actively encouraged this and it was fantastic, albeit also a quick way to make everything extraordinarily complicated

Now read it backwards. (dog latin), Friday, 24 October 2025 04:20 (two months ago)

I've 100% fallen into that same workflow in Reason, using tons of parallels or triggers to supply anything that's "missing" from an individual track. Bass guitar needs midrange clarity --> distorted parallel filtered down to the frequency that should pop through. Kick has no weight --> trigger nearly inaudible low-end sample. Instrument wants interesting effect --> set up entirely in parallel chains and EQ/compress/image/automate independently. Need thickness or stereo width --> parallels with different treatments L/R or in separate frequency bands. Snare tails, room sounds, echo effects, instruments like Hammond organs where the ranges can behave very differently — it's tempting to carve every last thing out into a billion separate elements to handle them more precisely.

For gain staging and headroom I think it's potentially better, because you can carve unneeded energy out of each element (like not having excess low end building up in reverbs or whatever). I guess it does make the number of tracks and busses you're looking at complicated, but it feels much tidier and more manageable to me, in part because stuff is clearly labeled: the thing giving the bass its 1k presence isn't a dial on an insert buried deep in the rack, it's a big-ass slider that says BASS BUZZ.
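
As a toy sketch of that BASS BUZZ channel (the band edges and drive here are arbitrary assumptions; the point is just that the parallel carries only the slice you want, so no stray low end builds up on it):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bass_buzz(dry_bass, fs, lo_hz=700.0, hi_hz=1400.0, drive=6.0):
    """Hypothetical 'BASS BUZZ' parallel: keep only the band around 1k,
    distort it, and give it its own clearly-labeled fader."""
    sos = butter(4, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, dry_bass)
    return np.tanh(drive * band)   # blend under the dry bass to taste
```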

That said, some of the next things I'm trying to get a better grip on are dynamic EQ, mid-side EQ, and multiband compression — all things that should make this kind of carving-up less necessary. There are definitely times when I'm just splitting a part into different frequency components to handle separately, when that can already be done by a single multiband effect.

ን (nabisco), Friday, 24 October 2025 13:49 (two months ago)

I'm in a similar place. I've been using Trackspacer a ton for dynamic EQ, although I just now learned that I've got a native plugin (in Cubase) that can do it. Love it but it can be a bit hard to keep track of after the fact.

Have done some mid-side stuff, mostly to settle an argument between two tracks/instruments, but don't have an overall philosophy for it and it's not the first thing I think about. I know at least one pro engineer who rejects it (on the basis that he has spent his whole career thinking about eq 'vertically' and not 'horizontally', and the classics weren't made with mid/side EQ).

Multiband compression is another weak point; I've only pulled it out as a corrective (say, in a sampled guitar or synth arpeggio when only certain notes need to be squashed a bit). But I wonder how often great engineers are using it as a first stop to shape things? I generally just EQ first and then compress, although I've had plenty of psychotic chains (just one more FabFilter instance to get one more little frequency cut!). Analog Obsession's Comper has been my go-to for a while now.

Jordan s/t (Jordan), Friday, 24 October 2025 16:35 (two months ago)

Transients are a big focus for me right now, which is kind of embarrassing as a drummer. I was so obsessed with the sound of saturation and compression that I sort of didn't realize I was softening all my transients way too much, in a way that doesn't bother me when I'm mixing but really stands out when I listen back or compare to other music. I do have some transient shapers that can act as an easy fix, but I'm trying to get better about getting it right through compression settings and the parallel stuff.

Jordan s/t (Jordan), Friday, 24 October 2025 16:37 (two months ago)

Oh, that reminds me of one real "revelation" I had: that you can sometimes separate parts just by nudging something forward or backward so the transients don't hit at the same time. I thought that would only help for programmed parts that were genuinely triggering on the exact same beat, but I feel like it's worked with some live performances as well? (Particularly when arranging reverb/predelay to make it feel like one sound is just reaching you from slightly further away.) Works especially well if you match up compressor attack on the first thing with the lag on the second, like maybe a hat gets X milliseconds of transient and then squashes down right in time to let a shaker come through.
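
(Offline, that kind of nudge is just a sample shift; here's a hypothetical sketch, with 5 ms standing in for whatever offset you're auditioning:)

```python
import numpy as np

def nudge(x, fs, ms):
    """Shift a track later (+ms) or earlier (-ms); the same thing as
    dragging the region in the DAW, expressed in samples."""
    n = int(round(fs * ms / 1000.0))
    if n >= 0:
        return np.concatenate([np.zeros(n), x[:len(x) - n]])
    return np.concatenate([x[-n:], np.zeros(-n)])

# e.g. push a shaker 5 ms behind the hat at 48 kHz (240 samples):
# shaker = nudge(shaker, 48000, 5.0)
```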

ን (nabisco), Friday, 24 October 2025 17:15 (two months ago)

three weeks pass...

This isn't a sudden "revelation," but ... it recently sunk in for me how muted guitars used to be in the high-mids and highs. Like yes, sure, today's production norms tend to have stuff so painfully up-front and blaring through those frequencies (with limiting and sidechaining to stuff every last gap) that most anything will seem muted in comparison. But I keep going over different 90s and 80s mixes, and I'm still surprised by how many guitar parts were totally rolled off up there, way deeper than I'd have guessed, to leave space for vocals and air. Rhythm parts especially, all these comfy jangles and grinds with a sound that you might describe, in isolation, as "unusably muffled" or "why is there a mile-thick blanket over the amp" — not just dipped in the high mids but shelved or even low-passed to some point well beneath that. And of course they all sound fine and great, because they're in mixes with enough sense of space or depth to let them sit comfortably in the background. (Or those sorts of 80s mixes where the vocals and snare get the whole spotlight to swim around in reverb while everything else is screened back.)

Maybe it's just my own archaic taste, but this made me test some stuff — recording guitars with purposefully minimal presence, mixing to see how attenuated I could get them without losing clarity, trying to push every part as far away as possible and not bring anything forward — and it not only sounded decent but also felt much easier to work with. I guess the two obvious boneheaded revelations here would be: (1) I'm still falling into that typical trap of trying to make every part feel solid/present/rich, instead of having a plan for how they all suggest a larger space, and (2) I need to periodically remind myself that half of my favorite music sounds washed-out, thin, dated, weird, amateurish, or even aggressively shitty, so trying to capture and mix sounds with hi-fi realism might not always be the most productive use of my time.

ን (nabisco), Wednesday, 19 November 2025 22:49 (one month ago)

That's really interesting, thanks for posting

Blues Guitar Solo Heatmap (Free Download) (upper mississippi sh@kedown), Saturday, 22 November 2025 19:26 (one month ago)

Same is true for synths and organs, tbh! Hammonds in particular can create an incredible “halo” sound, all this extreme 3K+ frequency info.

I’m mixing an album this month and I have two go-to methods of applying compressors (divided by frequency range, or a more intuitive element-by-element approach).

Years ago I created a front-end 4xVCA for an effects send— usually a reverb— in the hopes of applying it for more unstable, active usage of an effect. I called it a Paramatrix (named after the song it was being used on) and now I’m talking with an engineer friend about prototyping it in guitar pedal form.

Basically it takes a stereo signal and crossfades left to right and right to left with wide, unsynchronized LFOs to create this slow bubbling effect, sudden waves of reverb crashing down on hard-panned channels. It sounds great — when you're using Eventide reverbs, which I have come to realize are uniquely dual-mono. Applying the Paramatrix to most other reverbs just results in "a bit of panning" instead of the Moses-parting-the-Red-Sea sound that I've been getting with the Eventides
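
Schematically it's something like this (a loose sketch, not the actual patch; sine LFOs at made-up rates stand in for the real 4xVCA control):

```python
import numpy as np

def paramatrix_sketch(left, right, fs, rate_a=0.07, rate_b=0.11):
    """Loose approximation: two slow, unsynchronized LFOs crossfade
    L into R and R into L, so washes surge between hard-panned sides."""
    t = np.arange(len(left)) / fs
    a = 0.5 * (1 + np.sin(2 * np.pi * rate_a * t))  # 0..1, sends L -> R
    b = 0.5 * (1 + np.sin(2 * np.pi * rate_b * t))  # 0..1, sends R -> L
    out_l = (1 - a) * left + b * right
    out_r = a * left + (1 - b) * right
    return out_l, out_r
```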

by the clicking of her thumbs, something canine (flamboyant goon tie included), Saturday, 22 November 2025 19:57 (one month ago)

My digital matrix mixer setup that I finally got working a couple weeks ago is also in the shop to be packaged for live usage. I was feeling a little nervous about making the investment (it’s expensive) but Damien Taylor looked at it and the Max/MSP patch I’ve got controlling it and said that I should think of it as my “Vangelis organ”— a unique instrument of my own design— rather than how I was feeling about it (a 40-something buying a BMW in the hopes of attracting all the ladies, er, all the specialized audio gigs)

by the clicking of her thumbs, something canine (flamboyant goon tie included), Saturday, 22 November 2025 20:01 (one month ago)

Okay, there's a good chance y'all were already doing this and I'm the only idiot for whom it's a revelation, but ... are you constantly flipping the polarity of different tracks just to see what happens?

I only ever thought to do this when there were multiple mics on the same source, to check for phase cancellation. But the other night I was working with a DI bass part, which I'd sent to a parallel channel so I could handle the low end and midrange separately. It would never have occurred to me to flip the polarity of a parallel channel; I suppose I just thought, you know, it's literally the same waveform, so inverting it should cause cancellation; all things being equal, they'd cancel out to silence. Somehow I never thought about how a little processing would change that relationship. I hit the invert button by accident and the bass suddenly snapped together, sounding infinitely better than either part had before. I was especially surprised that the biggest improvement was in the low end, which I'd completely filtered out of the channel being flipped; I assume this was either psychoacoustic or had to do with some kind of phase shift the EQ was introducing? (Disclosure: I very much do not understand the phase-math of how EQs work.)
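
A toy sketch that convinced me it's not a no-op, assuming a minimum-phase 300 Hz high-pass standing in for whatever processing is on the parallel. A filter like that shifts phase well below its own corner, so the copy is no longer "literally the same waveform", and the summed level changes when you flip it:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 80 * t) + 0.3 * np.sin(2 * np.pi * 640 * t)  # fake DI bass

# Parallel channel: high-passed copy (minimum-phase IIR = phase shift below 300 Hz)
sos = butter(4, 300, btype="highpass", fs=fs, output="sos")
parallel = sosfilt(sos, bass)

for flip in (1, -1):
    mix = bass + flip * parallel
    rms_db = 20 * np.log10(np.sqrt(np.mean(mix ** 2)))
    print("inverted:" if flip < 0 else "normal:  ", round(rms_db, 2), "dB RMS")
```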

Since then I've been periodically toggling polarity on any parts that are similar to one another, just to see what's up. No results have been quite as shocking as that bass, but way more stuff than I'd have expected benefitted. Parallels in particular (makes sense, identical signals are the obvious issue here), but also parts that are just similar, like double-tracked guitars or stacks of vocals. Is this another instance of me belatedly realizing a very basic thing?

ን (nabisco), Tuesday, 2 December 2025 00:34 (three weeks ago)

Wow, I've never flipped it for a parallel either, weird!

Jordan s/t (Jordan), Tuesday, 2 December 2025 01:01 (three weeks ago)

I flip them all the time, yep. Also, I am always switching my mixes (and elements of them) from stereo to mono to see what things I can learn

by the clicking of her thumbs, something canine (flamboyant goon tie included), Tuesday, 2 December 2025 02:32 (three weeks ago)

Yup, okay, I checked and this applies to well more than half of all basslines where I put saturation or distortion on a high-passed parallel channel. Inverting changes them from two distinct layers into one big fat hairy sound. Same with a few distorted parallel vocals I used for midrange presence. Seems so obvious now: keep checking the polarity of parallels as you process them.

My main mono/stereo switching thing is done less for the quality of the mix and more for the fun of it — I like mixing everything as completely as I can manage in mono, then doing all the panning in one delightful sweep that suddenly makes everything easier and better. I guess the downside is that you may then have to rethink a few decisions you made. But the huge upside, for me, is that while working in strict mono you can toss a few elements off to one side just to hear them more clearly, like a replacement for dimming or soloing. That’s just immensely helpful for an amateur who can’t always hear subtle moves in a crowded mix, and might be tempted to do too much work in solo. If two parts seem to be fighting they get a time out over on the left speaker while I sort out the problem, then back to the center while I spotlight something else.

ን (nabisco), Tuesday, 2 December 2025 16:29 (three weeks ago)

Wow, that's a solid revelation. Now I'm having that chilling feeling, wondering whether I should go back to all the sessions for an album that I've already made premasters for and try the phase on the sends/parallels. I used a ton of fx sends, but they're generally pretty quiet in the mix (like a hugely distorted/crushed version that sounds crazy on its own, but just adds a little grit and harmonics in the full mix).

I do check in mono at least, in the DAW and by using a mono Bluetooth speaker (one that has a bit of a weird frequency curve - if I can hear the bass and it sounds balanced and generally like music on the Anker, then it's a good mix). It's usually fine, I'm generally panning for taste rather than to solve conflicts these days.

And I've started using auto-pan again for certain background things, after swearing it off after abusing it early on. Generally pretty slow and not extremely wide, so it's not as obvious and distracting.

Jordan s/t (Jordan), Tuesday, 2 December 2025 17:59 (three weeks ago)

Speaking of auto-pan, I'm sure there's a plugin that does this, but what I wish for sometimes is one that will go from hard R to hard L (towards center), instead of coming back towards center once it hits the limit of its pan. Does that make sense?

Jordan s/t (Jordan), Tuesday, 2 December 2025 18:01 (three weeks ago)

Not yet, it doesn’t. Where does the sound go when it hits hard L?

by the clicking of her thumbs, something canine (flamboyant goon tie included), Tuesday, 2 December 2025 18:17 (three weeks ago)

Hard R! Like, it would keep either moving continuously from L to R, or you could change it from L to R. And it would jump to the other side when it hits 100.

Jordan s/t (Jordan), Tuesday, 2 December 2025 18:21 (three weeks ago)

(that should be "change it to R to L" obv)

Jordan s/t (Jordan), Tuesday, 2 December 2025 18:28 (three weeks ago)

Imagine that your sound is basically a mono signal, and it’s “moving” from one channel to another only because of the amount of gain you’re sending to each speaker.

Hard L = 0 dB / −∞ dB
Centre = −3 dB / −3 dB
Hard R = −∞ dB / 0 dB

The only way for you to have the sound “reenter” from the left side again is if it either fades out entirely first on the right side…

…or if the sound is modified in some way on the left side, but then it will actually have to read as “a new sound”. If it’s the same sound (or close to it, like out of phase or lightly processed) it will just sound like ping-pong panning.
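
The standard constant-power version of that, in code (the −3 dB centre falls right out of the cosine/sine):

```python
import numpy as np

def equal_power_pan(x, pos):
    """pos = -1 (hard L) .. 0 (centre) .. +1 (hard R).
    Centre gives 0.707 per side, i.e. the -3 dB / -3 dB row above."""
    theta = (pos + 1) * np.pi / 4                 # 0 .. pi/2
    return np.cos(theta) * x, np.sin(theta) * x   # (left, right)
```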

by the clicking of her thumbs, something canine (flamboyant goon tie included), Tuesday, 2 December 2025 18:32 (three weeks ago)

Oh wait, I see, you want it to JUMP. Like a ramp. I think normal panners can do that, no? Pretty sure the Logic plugin has that option

by the clicking of her thumbs, something canine (flamboyant goon tie included), Tuesday, 2 December 2025 18:33 (three weeks ago)

I have low knowledge here due to working in Reason, which is built around making that sort of thing simple to set up as modular/CV stuff. But I know there are various LFO-generator VSTs that let you draw and output your own curves — you could draw one in as a straight ramp from one side to the other and then link that to panning. Just googled and a couple of the VSTs that came up have panning included, along with some other basics. If you already have any synth VSTs, there’s also a decent chance one of them can pass through audio and has some kind of system where you could draw in that modulation, or even just use a very slow ramping sawtooth wave to modulate the pan from zero to max and then straight back to zero.
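
Sketch of the sawtooth version (numpy, offline; the 8-second sweep is an arbitrary assumption). The ramp climbs from hard L to hard R and then jumps straight back, which is the non-ping-pong motion in question:

```python
import numpy as np

def ramp_pan(x, fs, sweep_s=8.0):
    """Rising-sawtooth pan: sweeps L -> R over sweep_s seconds,
    then snaps back to hard L and starts over (no ping-pong)."""
    t = np.arange(len(x)) / fs
    saw = (t / sweep_s) % 1.0                     # 0 -> 1, instant reset
    theta = saw * np.pi / 2                       # hard L .. hard R
    return np.cos(theta) * x, np.sin(theta) * x   # (left, right)
```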

ን (nabisco), Tuesday, 2 December 2025 18:54 (three weeks ago)

Ha, sorry, xposted before seeing there are stock options! (This is the main area where I have no idea how it looks in other DAWs; the main good thing you can say about Reason is that if you want to make any one thing modulate any other thing, it’s right there)

ን (nabisco), Tuesday, 2 December 2025 18:59 (three weeks ago)

Yes, a ramp! I think I could do this now that they've added Ableton-style modulators to Cubase. I should try using those for pan automation rather than pulling up PanMan every time.

Jordan s/t (Jordan), Tuesday, 2 December 2025 19:02 (three weeks ago)

My latest thing I'm hemming and hawing about (and really just need to experiment with) is this: when recording drums (or even a single drum) with multiple mics at multiple distances, should I bother time-aligning the transients? Theoretically this would result in a tighter/punchier sound, but my gut says 1) it seems like one of those things people do just because they can, now that we can see everything visually in great detail, and 2) it probably wasn't done when everyone was working on tape, and maybe leaving the mics unaligned results in greater realism/depth (the sound actually hitting the mics later when they're farther away)?

Jordan s/t (Jordan), Tuesday, 2 December 2025 19:06 (three weeks ago)

Sorry, this is the opposite of a revelation.

Jordan s/t (Jordan), Tuesday, 2 December 2025 19:06 (three weeks ago)

I have no idea and can’t help you. My “drum hack” is always hiring the same engineer for any drum sessions so I know things don’t need any finessing once recorded, just levels and compression and effects

by the clicking of her thumbs, something canine (flamboyant goon tie included), Tuesday, 2 December 2025 19:29 (three weeks ago)

this thread stresses me out

na (NA), Tuesday, 2 December 2025 20:22 (three weeks ago)

J, I think the pre-digital version of that was just putting on headphones and spending hours painstakingly nudging a mic until the sound cohered. (There’s actually a polarity hack for this! Invert one mic and move it until the sound gets really weak, which is easier to hear — then flip it back.)

I think mics would need to be pretty far apart (or pretty different types) to get spatial realism instead of potential phase interference. But I don’t think you need to go zooming in and nudging things into sample-perfect alignment to check? If you have a simple delay that lets you do tiny time values, you could throw it on 100% wet with no repeats and click through increments to see if little millisecond nudges are making a difference. (I think sound travels about a foot in a millisecond, so that’s not super-precise. But I also don’t know how precise you can reasonably be when zooming in to align peaks, and the delay is much easier to play with in search of any difference.)
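
(And if you'd rather compute the nudge than hunt for it, the arithmetic is tiny. This hypothetical helper assumes you know the extra distance, which you'd still fine-tune by ear:)

```python
import numpy as np

SPEED_OF_SOUND_FT_PER_S = 1125.0   # ~1.1 ft per millisecond

def align_far_mic(far, fs, extra_distance_ft):
    """Advance the far mic by its acoustic travel time so its transient
    lands with the close mic's. A room mic 6 ft further back at 48 kHz
    arrives ~256 samples (~5.3 ms) late."""
    n = int(round(extra_distance_ft / SPEED_OF_SOUND_FT_PER_S * fs))
    return np.concatenate([far[n:], np.zeros(n)])
```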

As for whether that’s worth tons of attention, I have no idea, most anything I touch will have glaring problems much bigger than this!

ን (nabisco), Tuesday, 2 December 2025 20:55 (three weeks ago)

Oh wait, if by "different distances" you mean purposeful near and far mics that are, like, feet apart ... I'd have thought space was the idea and you wouldn't want them aligned, but now I'm curious what would happen if you did! (I'm picturing like a very short, insanely natural room reverb with no predelay, making it somehow stronger but also more distant?) Might play with this tonight. But yeah, that's even better for the "cut a few milliseconds, delay to compensate, slide back and forth to taste" experiment

ን (nabisco), Tuesday, 2 December 2025 21:16 (three weeks ago)

Good idea!

NA, I try to keep in mind that this stuff only matters insofar as communicating the feeling of the music. Like, if I had actual songs, a lot of little things might not matter as much (although I'm sure I would still obsess over them)? But personally I had to accept that for instrumental music, yeah, the mix *does* matter a lot in terms of the music working or not. If the whole point is to feel like a warm bath or to hit on a soundsystem, I've had to take responsibility for some technical things that I wouldn't care about for their own sake.

But then you watch tutorials where people are polishing the worst music in the world, and clearly the perfect mix is meaningless in that case.

Jordan s/t (Jordan), Tuesday, 2 December 2025 22:02 (three weeks ago)

Oh man. I know exactly what you mean. It messes me up in the opposite direction! I used to work on things from the bottom up, trying to make the instrumental sound rich and spacious on its own, getting in the weeds with sound-design stuff — and by the time I got to vocals they wouldn’t fit. (Or, with my own material, I’d keep trying to nail the backing and never even get around to recording the melody.) So now I start with vocals, and yeah, sometimes it is perfectly fine if the rest sounds like an 8th-generation cassette dub of itself, if that’s what lets the song through.

I used to have so much fun, though, working on electronic stuff where I could fiddle forever with the tactile feel in the way you’re describing. It’d be amazing to be good enough at this to expect both — not just knowing how to get a song across clearly but giving it all that goosebumpy detail too. Will settle for figuring out the former, though.

ን (nabisco), Tuesday, 2 December 2025 23:54 (three weeks ago)

that makes sense. i also came up on a lot of music that sounds "bad" from a technical perspective so usually i don't worry about it too much. but it does make me feel like i should try getting someone else to mix one of our songs and hear what it sounds like. if anyone wants to give it a shot as an exercise, i have a bunch of songs by my band that are technically still in the mixing process.

na (NA), Wednesday, 3 December 2025 20:09 (three weeks ago)

I'd give it a shot for fun :)

Jordan s/t (Jordan), Wednesday, 3 December 2025 20:15 (three weeks ago)

emailed u

na (NA), Wednesday, 3 December 2025 20:23 (three weeks ago)

