Nathan Lively's Blog

October 17, 2019

Console Prep: Simplicity is Key (FREE lesson from Guerrilla Mixing on Digital Consoles)

Learn more about Guerrilla Mixing on Digital Consoles.

Quick notes:

- Start with system output setup, so you can later focus on input
- Only set up the inputs that you need + 1 (minimal, with a spare lead vocal)
- Use the shortest signal path from input to output (if building from scratch) – get the signal to the listener
- Make sure you have all the channels you need (consider recording, intro, playback, communication, pink noise, DCA setup, etc.)
- 1:1 patch is usually simplest and most logical (avoid complexity!)
- Convenient patch is sometimes dictated by the console (only even and odd channels can be linked for stereo, fader bank layout)
- Have DCAs ready for controlling more channels


October 4, 2019

Should audience depth influence crossover frequency between main and sub?

Hypothesis: By choosing a lower crossover frequency I can expand the coupling zone between main and sub.

Conclusion: While lowering the crossover frequency does expand the coupling zone between main and sub and this fact may influence the system design, its advantages are secondary to the efficient functionality and cooperation of both drivers.

Coupling zone: The summation zone where the combination of signals is additive only. Phase offset must be less than 120º.

Bob McCarthy, Sound Systems: Design and Optimization

While working on a recent article about crossover slopes I started thinking about main+sub alignment and its expiration. If we know that ⅔ of the phase wheel gives us summation and ⅓ of it gives us cancellation, and we know the point in space where the two sources are aligned, then we should be able to predict the expiration date of the alignment, compare it to the audience plane, and consider whether lowering the crossover frequency to expand the area of interaction will benefit coverage.

If two sources are aligned at 100Hz and the wavelength of 100Hz is 11.3ft, then a 3.8ft distance offset will create a ⅓ λ (wavelength) phase shift (120º). If we have two sources at opposite ends of a room and they are aligned in the center, then we have a 7.6ft coupling zone. From one edge of the coupling zone to the other is ⅔ λ (240º).


80Hz has a λ of 14.13ft and would give us a coupling zone of 9.4ft, an expansion of 1.8ft.
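To make the arithmetic above repeatable, here's a minimal sketch in Python. It assumes a speed of sound of 1130 ft/s and models the coupling zone simply as the span over which the main-sub distance offset stays within ⅓ λ of the alignment point; the function name is mine, not from any published calculator.

```python
# Minimal sketch of the coupling-zone arithmetic above.
# Assumes 1130 ft/s for the speed of sound; names are illustrative.

SPEED_OF_SOUND = 1130.0  # ft/s

def coupling_zone_ft(crossover_hz: float) -> float:
    """Span over which the distance offset stays within +/- 1/3 wavelength
    (120 degrees) of the alignment point: 2/3 wavelength in total."""
    wavelength_ft = SPEED_OF_SOUND / crossover_hz
    return (2.0 / 3.0) * wavelength_ft

for f in (100.0, 80.0):
    print(f"{f:5.0f} Hz: {coupling_zone_ft(f):.1f} ft coupling zone")
# 100 Hz -> 7.5 ft (rounded to 7.6 above), 80 Hz -> 9.4 ft
```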

Lowering the crossover frequency to expand the coupling zone

Here’s a section view of a main+sub alignment where you can clearly see a cancellation at 24ft. The coupling zone is 29ft, which is 65% of the audience plane.


I can lower the crossover frequency and expand the coupling zone by 4ft, bringing it to 71% of the audience plane.


This process can be sped up using Merlijn van Veen’s Sub Align calculator. Here’s the same system design observing the relative level difference at 100Hz.


And here it is at 80Hz. Notice that the checkered pattern indicating the coupling zone has expanded.


Instead of putting every design through every potential crossover frequency, I made a new calculator that shows the percentage of audience within the coupling zone by frequency.


I am now able to quickly compare the potential benefit of selecting one crossover frequency over another by how much the coupling zone will expand or contract. Using the example from above, we can see that changing the crossover frequency from 100Hz to 80Hz only provides a 7% improvement. This doesn't seem significant enough to drive a system design decision on its own, but it could be weighed alongside other factors in the decision-making process.
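For readers who want to experiment with the idea, here's a rough sketch of what such a calculator does. To be clear, this is not the actual tool: the flat audience plane sampled at ear height, the alignment at the first seat, and every name in it are my assumptions.

```python
# Rough sketch of a coupling-zone-by-frequency calculator (illustrative only).
import numpy as np

SPEED_OF_SOUND = 1130.0  # ft/s

def pct_in_coupling_zone(main_xy, sub_xy, crossover_hz,
                         start_ft=5.0, depth_ft=45.0,
                         ear_height_ft=4.0, n=500):
    """Fraction of audience positions where the main-sub path-length
    difference stays within 1/3 wavelength (120 degrees) of its value
    at the alignment point (here, assumed to be the first seat)."""
    wavelength = SPEED_OF_SOUND / crossover_hz
    x = np.linspace(start_ft, start_ft + depth_ft, n)
    seats = np.stack([x, np.full(n, ear_height_ft)], axis=1)
    d_main = np.linalg.norm(seats - main_xy, axis=1)
    d_sub = np.linalg.norm(seats - sub_xy, axis=1)
    offset = (d_main - d_sub) - (d_main[0] - d_sub[0])
    return float(np.mean(np.abs(offset) <= wavelength / 3.0))

main = np.array([0.0, 20.0])  # flown main at a 20 ft trim height (assumed)
sub = np.array([0.0, 1.0])    # ground-stacked sub (assumed)
for f in (120.0, 100.0, 80.0):
    print(f"{f:5.0f} Hz: {pct_in_coupling_zone(main, sub, f):.0%} of audience")
```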


Let’s look at another example. In this case the vertical distance offset is reduced and the audience depth is increased.


The calculator reveals that a 120Hz crossover would include 58% of the audience in the coupling zone, but a 75Hz crossover gives us a 13% improvement.

Should I use this calculator to pick my crossover frequency?

No. When it comes to choosing a crossover frequency there are other more important factors to consider like mechanical and electrical limitations. If your design only puts a small portion of the audience in the coupling zone, changing the crossover frequency is not going to save you.

Instead, start by observing the manufacturer’s recommendations, then the native response of each speaker, and the content of the show and its power requirements over frequency.

All that being said, knowing more about the expected performance of a sound system is powerful. I might make design changes based on the calculator's predictions. I might do nothing. Either way, I walk into the room with fewer surprises during the listening and optimization steps.

If lowering the crossover frequency increases the coupling zone, why not just always make it as low as possible?

I don’t have a great answer for this question. As I mentioned already, there are limitations to how low you can go. One major tradeoff is that your main speaker will need to handle more and more power as the crossover frequency lowers, making it less efficient.

One clear benefit I can see is estimating the viability of an overlap crossover. If you are planning a system with an overlap crossover that goes all the way up to 120Hz and you look at the calculator and see that 120Hz will only be coupling through 50% of the audience, you might decide on a unity crossover to limit the main+sub interaction into those higher frequencies, making it more stable over distance.

What about aligning at 3/4 depth?

Right! I included a phase offset option to test this and it makes a big difference. In the most recent example, if I use a ⅓ λ offset (120º), the portion of the audience in the coupling zone goes up to 88%.


October 2, 2019

MYTH: FIR filters always add a lot of delay and are impractical for live sound

FACT: FIR filters have an arbitrary amount of delay based on their design, making their application flexible and practical.

In this interview with Michael John from Eclipse Audio we dispel FIR filter myths and discuss what live sound engineers need to know about their application in the field.

Nathan

So, I definitely want to talk to you about FIR Designer and filters and stuff, but first: when you are doing listening tests, what’s one of your go-to reference tracks?

Michael

I tend to use a lot of Sting and maybe even some Dire Straits, because the production quality on those tracks, especially on some of the older Sting albums, is just exceptional. Maybe the more recent Sting albums have become more highly compressed but if you go back a decade or two… And even “On Every Street,” which I think was the last major Dire Straits release, is just phenomenally mixed and some brilliant musicians there as well: tonally quite broad, dynamically very good, and the production quality is just top notch. It may not be quite as critical as solo piano pieces or solo voice, but they’re my go-tos.

Nathan

How did you get your first job in audio?

Michael

I've always played with electronics and audio gear since my teenage years; particularly assembling small PA systems, running sound at school events, and the like. That kind of led me through a path to do Electrical Engineering at university, and my first audio research job came straight from that. In the 3rd and 4th years at university, advanced signal processing was offered. I found that fascinating and gravitated towards it. That then led to audio research jobs here and in the USA. It felt like a foregone conclusion—starting with a lot of live sound interest and then going through electrical engineering.

I’ve always found live and pro audio a lot of fun. I remember even during school years I was encouraged a lot by my father to do many work-experience stints. I worked a few weeks at Australian Monitor—who used to make some really leading-edge high-power power amplifiers—and other places like Trackdown Studios, a tracking studio here in Sydney. (They later moved to Fox Studios which is now one of the big film production areas here in Australia, where they have space to record orchestras and other audio for film.) So audio, particularly live stuff, has always been in my blood and Electrical Engineering helped fulfil the signal processing / maths side.

Nathan

Nice. So, having started out working on shows, I’m curious if maybe the genesis of your software can be connected back there? Were there times when you were listening and you were like, “Well the speaker sounds really bad. I wish I could do something about it.” And later on you created software to do something about it?

Michael

Sometime in the last 10-15 years I started making a number of cabinets myself, particularly high-power wedges. I wanted to use DSP for those rather than make passive designs. The DSP amps (most of them) had only IIR-based filtering, but I could see user-loadable FIR coming to pro audio: in fully programmable DSPs (like the SigmaStudio Analog Devices parts), and even in the pro amplifiers and plate modules. But there were really no tools for flexibly designing FIR filters and loading them into those amplifiers. From my signal processing and programming background, I thought, "It's actually quite, I wouldn't say easy, but I'm quite capable of building GUI software to do this." So that set me on a path of developing my own tools, which I started using in some of my own small designs. I didn't sell anything but I was making some nice wedges and some modest size FOH cabinets. Through conversations with a variety of people, including Bennett Prescott, we started to realise there was something in the software beyond my own personal use; that maybe we could provide tools to the broader audio community.

Also, a lot of loudspeaker tools and many DSP programming environments involve setting up filters, making a measurement, seeing if the result is as intended, and then iterating. I didn’t see why the work process needed to be that way. I felt you could load a measurement, simulate and apply all the filtering you want, and confidently see what response you’re going to get if you actually put the filters back into the speaker processor. So that became our workflow.

Also, I was really big on wanting the user to be able to immediately see the effect of the changes they’re making.

Nathan

Right. And when you say real-time you mean see the results in the graph.

Michael

Yes, absolutely. And again, many tools don’t do that. You press “calculate” and wait a few seconds, then copy the settings over somewhere else and run a measurement. I intentionally designed the workflow and the compute engine in the software to be able to give the user immediate results. It’s the same for everything in the workflow, including things like the wavelet view.

Which leads me to something else we do a little differently in our tools. Again, rather than setting up processing, measuring, and iterating, we can actually put a target response into the software and actually force each loudspeaker driver towards the intended target. And from the way the targets sum together we know exactly how the (processed) loudspeaker drivers will sum together.

Maybe other software products do this, but we had not seen this before.

Nathan

That’s great. So it sounds like, where maybe people in the past were looking at more of an isolated, like, “Let’s see what each of these filters do electronically,” you started thinking of maybe a more holistic approach, like, “Let’s see what they do together and what’s going to be the acoustic result?”

Michael

Yes, and so the workflow involves loading—particularly in say “FIR Designer M,” although it can be done in “FIR Creator” as well—measurements for each driver and then emulating the processing and looking at the sum. Also, we can take a bunch of measurements for a driver and average them down to a single response before running that through the workflow. “FIR Creator” doesn’t have averaging, but we have a separate “Averager” tool. “FIR Designer” and “FIR Designer M” both have integrated averaging capability.

Some people have asked us to simulate the processing on other measurements in addition to the main measurement and display the responses. That’s something we’re looking to add. This could be used to see the effect of processing on, for example, off-axis measurements. Or we could even show a whole balloon plot and show the effect the crossover and processing have on the overall radiation pattern of the cabinet.

Nathan

I’m interested in this topic because I’ve been showing this game that I’m working on called Phase Invaders to people and one person said, “Hey, if we’re doing this alignment and I tell you about the audience plane that I’m doing the alignment for, you should be able to tell me the expiration date of that alignment,” and I was like, “Yeah, sure. That’s actually a good idea.”

Michael

Yes. I haven’t thought about extending that to the audience plane, but certainly, based on our current workflow we could eventually show that in terms of radiation pattern, which eventually would translate to the audience plane.

Nathan

Looking back on your career so far, what do you think is one of the best decisions you made to get more of the work that you really love?

Michael

I think the decision, and again it was somewhat of a foregone conclusion, to do Electrical Engineering. I have some friends who have gone down the live sound production path rather than going to university first, and they’re doing really well and they’re loving what they do. But for me personally, going to university while indulging my passion for live sound on the side, has helped my understanding of loudspeaker processing, ultimately resulting in the software we have today.

Nathan

Some follow-up questions: first of all, you don’t have any merch on your website yet, but if you did I have a suggestion. A t-shirt with Ned Stark with a thought bubble that says, “FIR is coming.”


Michael

Interesting, I like that.

Nathan

Ok, consider it. Number two, you mentioned several names of software but a lot of people reading this interview have no idea who you or Eclipse Audio are, so could you just go through each of the pieces of software?

Michael

Sure. Firstly, there’s “FIR Designer M,” our flagship product for integrated loudspeaker processing design. It enables the design and simulation for up to 6-way loudspeaker processing. It can inherently show the combined response of all the channels, and it’s our largest and most comprehensive tool.

“FIR Designer” does everything “FIR Designer M” does but for a single channel. With “FIR Designer” it's possible to do multi-way designs; it just requires multiple projects, one for each output channel. I think we see “FIR Designer” possibly being used more for creating filters for broad cabinet EQ or for installations.

Both products have unlimited filter capabilities and unlimited auto magnitude and auto phase bands.

“FIR Creator EX” and “FIR Creator” are similar to “FIR Designer” but with some feature limits, including limits on the number of filters and auto mag and phase bands. They’re designed to be more cost effective, including for hobbyists who may want to experiment with FIR and mixed IIR+FIR processing. Also, in “FIR Creator EX” we’ve provided professional export options: output formats for pro processors—such as DFM & Linea Research—whereas “FIR Creator” exports filters only as TXT & WAV files and some broad open formats. So, in “FIR Creator EX” we provide some of the pro capabilities of our flagship products but at a reduced price.

I should also mention “Averager.” We have measurement averaging within “FIR Designer” and “FIR Designer M.” But we also provide averaging as a separate, cheaper tool. It provides four or five different averaging modes. This is maybe more for hobbyist folks who wish to make a bunch of measurements in their space and distill them down to one measurement that they might want to use in some other tools, or in our tools; we don’t mind.

Nathan

You use these two acronyms: IIR and FIR.

Michael

Initially, I would point people to a paper we put on the website, About FIR Filtering, which provides a moderately technical perspective on FIR filters and their uses, as well as IIR filters.

In short, a FIR filter has a finite or limited time length impulse response, whereas an IIR impulse response can go on “infinitely.” How does the IIR filter do that? I'll defer to the paper on our website. The longer the filter impulse response, the more effective the filter is at EQ'ing lower frequencies. Because a FIR filter has a fixed length, it has a fundamental limit on how low in frequency it can effect magnitude and phase changes.

An IIR filter is much more efficient at going lower in frequency because of its infinite length. However, with IIR filtering, fine-grain control isn't very easy, and there's no independent control of magnitude and phase. On the other hand, FIR filtering can do some fairly fine-grain EQ and has fully independent control of magnitude and phase. That's the 30 second answer.
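A quick illustration of that length limit, sketched with SciPy (my example, not Eclipse Audio's): both filters below target the same +6dB boost under 100Hz, but the short one simply doesn't have enough time length to realize it.

```python
# A short FIR cannot effect EQ much below roughly fs / taps (illustrative).
import numpy as np
from scipy import signal

fs = 48000
freq = [0, 50, 200, fs / 2]  # Hz
gain = [2.0, 2.0, 1.0, 1.0]  # linear gain: +6 dB below ~100 Hz

for taps in (129, 4097):     # odd tap counts, since the gain at Nyquist is nonzero
    h = signal.firwin2(taps, freq, gain, fs=fs)
    w, H = signal.freqz(h, worN=1 << 15, fs=fs)
    g50 = 20 * np.log10(np.abs(H[np.argmin(np.abs(w - 50))]))
    print(f"{taps:5d} taps ({1000 * taps / fs:6.1f} ms): {g50:+.1f} dB at 50 Hz (target +6.0)")
```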

Nathan

What are some myths that you would like to dispel about FIR filters?

Michael

FIRs have often, I think, been associated with linear-phase processing—e.g. linear-phase brick-wall crossover filters. Linear-phase filters are inherently symmetric in their coefficients, so the delay through the filter is half the filter length. And so often people think, “I'm going to have all this high latency / delay and that's no good.”

In reality a FIR filter can be anything. It can have minimum-phase behaviour, maximum-phase behaviour, linear-phase behaviour. It can have a variety of multi-phase or, what I call, arbitrary-phase behaviour. And so, the delay through the filter is arbitrary. It’s really how you design the filter. The delay is not limited to the middle point of the tap length.

Also, people often associate FIR filters with things like horn correction. That’s definitely one use, but there are many more uses and we talk about them in the paper.

Nathan

Can you compare linear-phase, minimum-phase, and maximum-phase?

Michael

A minimum-phase filter imparts an EQ profile with the least amount of delay on the signal. IIR filters are, for the most part, minimum-phase and that includes any IIR processing in a speaker processor. By the way, you can measure IIR filtering from a processor (sampling the processing as a FIR filter) and then achieve exactly the same filtering. The measurement is a minimum-phase FIR filter.

A linear-phase filter can impart a particular EQ profile but without any change in delay across frequency. The delay is constant across the whole frequency spectrum.

Maximum-phase is a term not used very often because it's really something that's born out of FIR filtering, where the length is finite. Imagine you have a minimum-phase filter and you literally time-reverse the impulse response. Rather than every frequency point having the minimum amount of delay added to it as part of the EQ, every frequency point now has the maximum amount of delay added to it, up to the tap length of the FIR filter. Maximum-phase filters are not normally used directly. Maximum-phase filter prototypes (at least in our software) are combined with linear-phase and/or minimum-phase filter prototypes to make a single FIR filter that pushes a loudspeaker's phase towards flat or whatever target you desire. You can even use the phase of another loudspeaker (if the aim is to match two loudspeakers).

Minimum-phase and linear-phase are the two most common filter types.

BTW, it's probably not obvious, but in some of the leading processors the system EQ is actually implemented as a very long minimum-phase FIR filter. It's just easier, as the system engineer is adjusting the EQ, for the processor or control software to convert the desired EQ into a minimum-phase FIR filter, rather than to try to emulate the desired EQ with a bunch of biquad IIR filter changes. These implementations make the user experience smoother, in terms of quickly facilitating exactly the EQ the user wants.
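Here's a small SciPy sketch (mine, not from FIR Designer) of the three behaviours Michael describes, including his time-reversal construction for maximum phase. Note each filter's delay is relative to its own tap length: linear phase sits at half its length, minimum phase near zero, maximum phase near its full length.

```python
# Linear-, minimum-, and maximum-phase FIRs compared by group delay.
import numpy as np
from scipy import signal

fs = 48000
h_lin = signal.firwin(255, 1000, fs=fs)   # linear phase: symmetric taps,
                                          # constant delay of (255 - 1) / 2 samples
h_min = signal.minimum_phase(h_lin, method='hilbert')  # similar magnitude, least delay
h_max = h_min[::-1]                       # time-reversed impulse response -> maximum phase

for name, h in (("linear", h_lin), ("minimum", h_min), ("maximum", h_max)):
    w, gd = signal.group_delay((h, [1.0]), fs=fs)
    print(f"{name}-phase: ~{gd[np.argmin(np.abs(w - 500))]:.0f} samples of delay near 500 Hz")
```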

Nathan

Great. So now that we’ve defined some of these terms, I think the next thing a lot of people are going to be wondering is, “how do I use them? Where do FIR filters go?”

Michael

In a lot of cases the FIR filters are being created specifically for the speaker, not for the room. And they’re loaded via the control software for particular brand amplifiers.

Nathan

What about you personally?

Michael

For me personally, I have amplifiers – the Lab Gruppen PLMs – that can run FIR filters (in the “FIR 3-way” module), so I would be loading FIRs there. I don't use custom arbitrary-phase FIR for broader system EQ in these or in a separate processor. I tend to personally use the FIR filtering for driver-based adjustments, as part of a loudspeaker preset.

However, to answer the question about where to use FIR filters more broadly, we do see some customers using FIR filters in installations. Especially where the system is made up of different makes and models of loudspeakers, each with different phase characteristics. When arraying different cabinets in a larger system, it's helpful to have the phase matching between those cabinets as part of the full system optimization. If the phase doesn't match, there's the risk of getting response holes in overlapping coverage areas in the room. Or at least some lowering of energy in overlapping coverage areas, particularly at frequencies where the phase is dramatically different between the two cabinets. Phase matching the cabinets first makes the tuning of the coverage easier in a larger installation.

We're also seeing some loudspeaker manufacturers phase-match all their product lines, so that end users can mix and match their products without having to think about it.

Nathan

I know that almost every Meyer Sound speaker can play together nicely because of matching phase characteristics.

Michael

Now that we have FIR filters, it makes the matching process a little easier across product lines. Yes, I think Meyer may have been one of the first to do this, but other manufacturers are definitely doing this too.

Some speaker processors and amplifiers are starting to provide dual FIR’s. That is, every output has a FIR filter for the loudspeaker preset, and a second FIR filter for array processing. A line array is the best-use case, where the first filter is used to tailor the cabinet response, and then the second bank of FIR filters (across all amplifiers feeding the line array) is used for coverage optimization within the space. Array processing is becoming a big thing in live. I think Martin’s MLA was one of the early ones and there’s AFMG’s solution FIRmaker. EAW’s Anya is another example of a fully steered array. And they’re not the only ones.

Nathan

Do you think FIR filters have a place in the work of a live sound engineer? Are there some of these things that are field applicable? Or are these only for manufacturers?

Michael

That’s a tough one. I honestly don’t know. In the installation scenario, I think it makes a lot of sense where there’s plenty of time to do many measurements and synthesize the filters to achieve a certain system result.

I think in a touring context, I’m not sure it’s as applicable or useful. If the loudspeakers are inherently phase matched and work well together anyway, and given the time constraints in a tour or a typical live setup, just getting the system EQ to be nice is top priority.

That said, considering the current minimum-phase system EQ in a large system, as you start to make minimum-phase EQ changes – particularly in a multizone system – you may start to get slight phase changes between regions of the system, or even between the (short|medium|long throw) sections of the line arrays. So maybe in the future the system EQ will move towards something that at least can maintain phase consistency across the broader PA system. I don't know. That's something I think might be worth considering and investigating.

Nathan

I’m sure you get a significant number of support emails. When I first started using “FIR Creator” I had a lot of questions—I was emailing you a lot—so you saw me make a lot of mistakes, and you see other people making mistakes. What do you think are some of the biggest mistakes that people are making who are new to “FIR Creator,” or any of the software or FIR filter use in general?

Michael

I wouldn’t necessarily classify it as a mistake, but I do caution people not to lean too heavily on the auto-correction functions: the Auto Magnitude tab and the Auto Phase tab.

Nathan

But that’s the most fun. It’s the single-button solution.

Michael

Yeah, I completely get it because suddenly everything magically goes flat and the response becomes just the way you want it.

But I guess my caution is because, as you know, drivers change their behaviour with level and with temperature, and a measurement taken at one spot in a room is very different to a measurement in another spot in a room. There’s so much variability in the measurement process and in the loudspeaker. A loudspeaker is a mechanical device. It wears out. It changes its behaviour over time. If you start to correct for very fine grain structure that’s in your measurement, you may be correcting perfectly for one measurement location in the room on a specific day and time, but you may make things slightly worse at other points in the room or at other levels….

Nathan

Or later in the day; temperature changes.

Michael

Yes. And so, I was even quite nervous putting those features in the software in the first place. I know that’s what people want. They want auto-correction.

Nathan

That sells a lot of software, I’m sure. I mean, the first time I saw someone use it, that’s the thing I remember. Like “oooooh, I need to get that,” and then I did, and then you made some money.

Michael

I completely get it. And yes, it is very satisfying for that measurement and you do notice it. If you’re listening at that spot where your microphone was, you notice it become flatter or clearer, or whatever other perceptive attributes you use.

However, the pro manufacturers know not to over-EQ. They have hundreds or thousands of a cabinet, all potentially with subtle variations (including production variations), and so they know that EQ’ing finely for one cabinet is not necessarily the right thing to do for a loudspeaker preset. That said, I have actually seen a couple of very fine-detail production FIR filters come to me from notable companies who I thought might have been a little more subtle in their correction in their presets.

Nathan

Here comes my second t-shirt pitch to you. How about Spiderman with a big title across the top in block letters “Auto Magnitude”, and then the subtitle, “With great power comes great responsibility”?


Michael

That’s a very relevant one!

Nathan

Ok, just consider it. You don’t have to decide now.

I want to ask you about a lot of things specific to your software. I know that a lot of people reading this aren’t going to care because they don’t use that software right now. But that’s ok; they can skip over this part and then we’ll wrap up at the end with some other stuff.

In the very first step in the Import (and also in the Auto Magnitude tabs) you have smoothing options Complex and Power, and in the Auto Magnitude you have Complex and Mag. What’s the difference between Complex and Power, and when would I use one or the other?

Michael

You’re probably familiar with Smaart and they have a similar option to use one or the other…

Nathan

They have Complex and Polar, right?

Michael

Their Complex and Polar options relate to averaging over time. “Polar” does a dB average over time for each frequency point, which makes the time averaging more stable where wind or other mechanical disturbances are causing fluctuations, particularly phase fluctuations, over time. “Complex” does true complex average over time and is better where the measurement is stable over time. If the phase is fluctuating over time, the complex averaged result can have fluctuations in level due to frequency points that have different phase partially cancelling-reinforcing-cancelling-etc.

Nathan

So if I’m doing TF measurements outside, my magnitude trace in Smaart will be more stable if I’m using polar averaging because it ignores phase, while complex averaging will give me more accurate results when I’m measuring inside in a stable environment?

Michael

Yes, Polar will ignore phase when averaging over time to create the magnitude plot, but I suspect Smaart might be doing full complex averaging as well for calculating the phase for the phase plot. You probably want to clarify that with someone from Rational Acoustics.

The concept is very similar to measurement smoothing in “FIR Designer,” only we’re averaging over frequency (rather than time). Measurement smoothing involves frequency-localized averaging. When frequency components are averaged together, if the phase is changing quite dramatically across frequency, frequency components can partially cancel each other, lowering the energy of the smoothed result. And that’s often more evident at high frequencies. When using complex smoothing, if you start adding some delay to the measurement, you’ll start to see the energy drop quickly at the higher frequencies. However, power smoothing discards the phase completely and smooths just the energy across frequency, resulting in a magnitude that better matches what we’re hearing.

Now, to the difference in labels between the Import tab and the Auto Mag tab, the ‘Mag’ smoothing is power smoothing. I’ll update that.
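Here's a toy numpy demonstration of that cancellation effect: one frequency bin with steady magnitude but wobbling phase loses level under complex averaging, while a phase-blind power average holds at 0dB. (Smaart's “Polar” is specifically a dB average over time; the power average below just illustrates the phase-discarding idea.)

```python
# Complex vs phase-blind averaging of a bin with fluctuating phase.
import numpy as np

rng = np.random.default_rng(1)
phase = rng.uniform(-np.pi / 2, np.pi / 2, size=200)  # unstable phase
bins = np.exp(1j * phase)                             # constant 0 dB magnitude

complex_avg = np.abs(bins.mean())                # vectors partially cancel
power_avg = np.sqrt(np.mean(np.abs(bins) ** 2))  # phase discarded

print(f"complex: {20 * np.log10(complex_avg):+.1f} dB")  # a few dB below 0
print(f"power:   {20 * np.log10(power_avg):+.1f} dB")    # exactly 0
```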

Nathan

I see. So if 10kHz is at 0º and somehow 10.1kHz is at 180º and they are averaged together, there might be a cancellation?

Michael

Yes, definitely.

Nathan

And could you talk about when I might need to use one or the other?

Michael

Use “Power” smoothing on the Import tab (or “Mag” smoothing on the Auto Mag tab) if you have a very messy measurement. By that I mean not just messy in level but particularly messy in phase. If the measurement is very messy in phase, you run the risk of losing energy in the frequency smoothing. And messy phase—messy measurements in general—often come from measurements in real rooms, as opposed to measurements done in an anechoic chamber. Loudspeaker preset measurements, often in an anechoic chamber, tend to be very clean. In-room measurements, as you know, tend to be messier, particularly from reflections.

Nathan

In the Magnitude Adjustment tab, how are the minimum-phase filters different from the IIR filters?

Michael

They're the same. That's an easy one. So, in the Mag Adjustment tab we provide filters everyone's familiar with, but we give the option of changing the phase of those filters. For example, you can use a Linkwitz-Riley magnitude response, but with either linear-phase, minimum-phase, or maximum-phase. On the Magnitude Adjustment tab, I tend to refer to those as ‘filter prototypes' (rather than ‘filters') because they all get added together to create the larger FIR filter response.

The lists are slightly different. There are certain filters on the Mag Adjustment tab that can't be implemented as IIRs, like the Keele-Horbach filters.

Nathan

Ah, yes, Keele-Horbach. My favorite.


September 27, 2019

6 Smart, Proven Methods To Control Feedback Onstage (Without EQ)


There is nothing worse than spending an entire event struggling with feedback demons. You may have been taught to fight feedback with a graphic EQ, but there is a better way. Actually, that’s not true: there are six better ways. Use my guide to controlling feedback onstage and mix in fear no more.

“The feedback frequency is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them.” –Wikipedia

Method #0 – Psychology

I had to include this step 0 because the more I thought about it and the more I talked to other sound engineers, the more this came up. When it comes to improving your GBF (gain before feedback), start with the beginning of your signal chain and work forwards.

Example 1: Jason works as an AV tech on city council meetings. He was having lots of feedback problems and asked for my help. After we went through everything in the signal chain and made improvements where we could, the most important change we made was simply explaining to the council members the importance of proper microphone positioning. Nothing else we did made more of an impact than getting that first step right.

Example 2: When Brian Adler works as a monitor engineer in situations where he expects GBF to be an issue, he will purposely start with the vocal mics way too loud in the mix. This gives the performer a little shock and starts the sound check with them asking for their mix level to be turned down, instead of what normally happens.

Probably the biggest tip I can give in this area is to be proactive and be a pack leader. You don't want to wait until the stage is all set up and you are halfway through the sound check before you approach the guitarist about potentially moving his amp for a less face-melting experience. Instead, while you're giving them a hand loading in, mention: “What we normally do here is put the guitar amp on this stand so that you can hear it well and I can get a better mix out front.”

Or for vocalists: “We’ve found that the ideal position for the monitor is with this microphone in this position. If you want it to be somewhere else, I’m totally fine with that, but it might not be able to get as loud, so we’ll have to work around that.”

Method #1 – Microphone Placement

Close Miking

For loud stages and busy rooms, close miking is generally the way to go. It might not always be the best for sound, but for maximum gain before feedback, you have to kiss the mic. Remember, with each doubling of distance, sound pressure is cut in half (a 6dB drop). Plus, if you're working mostly with Shure SM58 and SM57 microphones, that's how they are designed to be used anyway.
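In numbers, that rule of thumb is just the free-field inverse square law; a hypothetical helper:

```python
# Level change when the source-to-mic distance changes (free field).
import math

def level_change_db(old_ft: float, new_ft: float) -> float:
    return 20 * math.log10(old_ft / new_ft)

print(level_change_db(2.0, 1.0))  # halve the distance: +6.0 dB in hand
print(level_change_db(1.0, 4.0))  # back off from 1 ft to 4 ft: -12.0 dB
```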

For corporate audio this usually means teaching your presenter how to handle the mic. For theatre this means adjusting headworn capsule placement. I have seen sound designers successfully mic a play without headworn microphones, but it’s tricky (see How To Mic An 800 Seat Theatre With Floor Mics).

Polar Pattern (image from SoundOnSound)

For concert sound you almost never use an omnidirectional mic. Microphones with a cardioid pickup pattern have the most rejection at the rear of the mic capsule, which should be pointed at the stage monitor.

Don’t cup the mic! This will defeat the directional pattern, turning it into an omnidirectional mic.

Corporate and theatre events require specific and stable placement of the microphone capsule. Some sound engineers argue in favor of using omnidirectional capsules on the grounds that they are easier to place and produce more reliable results with the movement of the actor. My experience is that none of that matters when the audience can’t hear the actor because you can’t get enough gain.

I’ve done a lot of musicals and concerts with omnidirectional head-worn microphones in the past, though, and it’s always a struggle. The performers can’t hear themselves, and if the audience starts clapping or singing along, chaos ensues. Why did I do this? Because it was what I had available. These days I try to let directors and event producers know way ahead of time about the limits of working with certain equipment. If possible, I’ll schedule a test so they can hear the difference in the performance space.

Method #2 – Speaker Placement

Stage Monitors

Floor wedges should be placed on-axis and as close to the performer’s head as possible. I’ve heard people suggest moving the monitor away from the performer for better gain before feedback, but don’t do that. That just creates lower sound levels at their ear level, so you’ll have to turn it up louder. Most live stages are loud enough as it is, so anything you can do to lower the stage monitor level will be helpful.


Have you ever seen those little Hotspot monitors? I haven’t seen them in a few years, but I love the idea. Put a small monitor on a stand and you significantly reduce its distance to the performer.

Sometimes, because of sightline issues or stage layout, you can't get a monitor right in front of a performer where a cardioid microphone's off-axis point is. This happens often with drummers and keyboard players whose instruments take up so much space and lead vocalists who want clear sightlines. This is when you need a hyper-cardioid or super-cardioid microphone, and this is why many live music venues have a collection of Shure SM58 (cardioid) and Beta 58A (supercardioid) microphones, or similar.

If you find yourself stuck with a drummer or piano player whose stage monitor is at a 90° angle to a cardioid microphone, try cheating the microphone out closer to 45° to get more rejection. If an artist requests a monitor position that is less than ideal for your microphone selection, go ahead and do it, but warn them that you may run into feedback problems and need to reconfigure the speaker and mic.

I've seen some pretty creative microphone and monitor placements that allow for very high gain before feedback. If you are working with acoustic instruments, ask the performers if they have any tips for placement. I used to work with a cello player in Portugal who placed the stage monitor a little behind himself so that it wasn't pointed at his microphone but was still aimed at his head. It worked great.

Stage monitor placement for theatre deserves its own article, but my number one tip is to start the conversation early. Explain your limitations to the production team and discuss ways to best accommodate the actors. You don’t want to realize in tech rehearsals that the actors can’t hear the musicians and that the director won’t allow downstage speakers. I often lobby for small downstage monitors straight out of the gate. I also try to make friends with the set director and builder as quickly as possible, alerting them to the fact that I’ll probably need help hiding speakers around the stage.

FOH

Make sure your FOH speakers are covering the house and not the stage. This means checking the speakers’ off-axis angles to make sure they are not spilling onto the stage or creating strong wall reflections. (See also: How To Tune A Sound System In 15 Minutes.) I’ve heard people say that all microphones must be at least six feet behind FOH, but I’ve seen it done many different ways. Some situations call for more separation and control, others less.

Method #3 – Instrument/Source Placement

If you are working with a loud rock band and you place the lead vocalist right in front of the drummer, guess what happens? Your vocal mic will be full of drums and your vocalist won’t be able to hear. This happens all the time, and explains why you see the bands on Saturday Night Live using a drum shield on that very small stage.

Your goal is to balance every source input for the performers and audience. Now let’s talk about the most frequent offenders.

Drums

Drums are loud. Some drummers are interested in harmony and balance, and will change their technique, use brushes, and dampen their instruments. Those drummers are in the minority. Why? Well, have you ever played drums? It’s fun as hell to play loud, and boring as shit to play soft, or so goes my personal experience.

If you’re on tour, you’ll need a rug and a drum shield. If you’re full-time at a venue, put absorption everywhere. Two of the noisiest venues I’ve worked at have pulled the same trick and covered their ceiling and walls with black semi-rigid duct insulation or vinyl that screws right into the wall. It made a big difference.

For more on this topic, see 5 Pro Drummers Explain How to Make a Drum Kit Quieter on Stage.

Electric Guitars

I’m a guitarist, and as such I’m fully aware of how hard it is to hear myself without the amplifier blaring. The only way I was able to handle this in my band was to learn to play without hearing. In the real world, getting a guitarist’s amp as close to their head as possible will help. Put it on a chair or milk crate. Most are open-back, so put a bunch of absorption back there.

In my interview with Larry Crane he mentions a guitarist who built a Plexiglass shield for his amp that redirected the sound upward at an angle so that he could play with feedback and do fancy things with his amp without blasting the stage. Pretty smart.


I worked on a show last year where the guitarist made a shield for his amp from case lids and jackets. This helped it not bleed into other microphones as much.

Buford Jones is famous for doing whole tours mixing from inside a truck outside of the venue. (He's even more famous for mixing some band called Pink Floyd.) These were large venues where they had little acoustic sound coming from the stage. The guitar amps were all in dog houses off-stage and all of the performers were on IEMs (in-ear monitors). Most of us won't experience that, but it gives you an idea of how far people will go to control sound levels on stage. If you are worried about approaching a guitarist to discuss changing their setup, just remember that asking them to turn down their amp and put it on a stand is nothing compared to removing it from the stage entirely.

Method #4 – Mix

Stage Monitor

Most performers these days are wise to the challenges of microphone feedback on stage and will make specific requests for their monitor mix. I’ve made it a practice to not add anything to a stage monitor mix until expressly asked to, except for vocalists who almost always need reinforcement. When musicians walk in the door saying, “Just give me a mix of everything,” they likely don’t know what they need. Smile and nod.

I’ve made it through entire shows without adding anything to some performers’ stage monitors because the stage layout allowed them to hear everyone. I’ve also worked on shows where the band has skipped sound check then walked on stage expecting a complete mix. I try not to work off of assumptions and I give people only what they need, because the lower your stage volume, the better your FOH mix will be, and everyone will be happier.

FOH

In small to medium venues, you aren’t “mixing” in the classical sense, you are doing sound reinforcement. You are balancing the acoustic energy in the room for a more pleasant musical experience. From my interview with Howie Gordon:

The other thing I hear a lot about [is] guys setting the whole mix base from the drums, and in my opinion that’s the last thing you should do because the thing that immediately suffers is vocals. It’s the one instrument that can’t control its own stage volume. -Howie Gordon

And from my interview with Larry Crane:

How many times have you been blown out of the water by the mains because you’re trying to keep up with the stage? It’s like, “No, no, no! That’s not necessary.” You’re not building the mix up from the kick drum at that point. You’re building the mix down from what’s happening on the stage, and you’re filling in what’s missing, just a little bit. -Larry Crane

If you need definition on the bass guitar, roll off the low end and mix it in. If you are missing the melody from the keyboard, bring up the right hand. If the guitarist is too loud then invert the polarity and lower his volume in the house with destructive interference. That's how noise cancelling headphones work.

(Just kidding! You know I’m kidding, right? If you actually try that and it works, keep it to yourself.)

Compression

Normally, I love compressors, but they raise the noise floor and reduce dynamic range, and therefore reduce gain before feedback. I would really like to use compression on lapel mics during corporate presentations, for example, but I’m often on the verge of feedback and can’t spare the gain.

Method #5 – The Holy Grail

IEMs, e-drums, synths. Done!


How an improperly connected motor cable arced up the chain and almost ended in disaster

Subscribe on iTunes, SoundCloud, Google Play, or Stitcher. Support Sound Design Live on Patreon.

In this episode of Sound Design Live I talk with the touring FOH sound engineer for Godsmack, Erik Rogers. We discuss how he dealt with various SPL limits while on tour, console setup, and why safety is never a compromise.

I ask:

- What's one of the biggest things you've learned while working with Godsmack?
- Tell us about the biggest or maybe most painful mistake you've made on the job and how you recovered.
- From FB – Sam: What kind of issues does he face on the road besides the difference in venues?
- Andreas: His routing in his desk (input – groups – matrix, etc.)
- Chris: Always interested in rock vocal chains, and mixing them with an up-front guitar sound. It can be difficult to separate them at times. Tips or tricks?
- Alfons: Does he use gates on all vocals & drums? Does he use peak or RMS compression?
- Rob: What beard cream is best?
- David: Compression: heavy or light? Pre or post EQ? What console are you on, what are you doing with bus compression?
- Mark: Can I train under him?
- Jonathan: What tricks does he use to get that epic kick sound?

Safety is never a compromise.

Erik Rogers
Notes

- All music in this episode by Godsmack.
- Hardware: iSEMcon 7150, calibrator, Neve Portico 2, Shure 91, Audix D6, Kelly SHU
- Workbag: iSEMcon EMX-7150, calibrator, Focusrite Scarlett preamp, flashlight, incense, headphones
- Books: The Art of Happiness by His Holiness the Dalai Lama
- Podcasts: How I Built This, Joe Rogan, Serial Killers

Quotes

- I don't have the same direction I did when I was 20.
- Sometimes you need to get hit in the face with a brick in order to focus.
- I guess that's the difference between me and a monitor guy; my customer service has a barricade in between us.
- You're an American and you look at 100dB and you're like, that's fuck'n crazy.
- I calibrate one microphone every day for SPL measurement and then my roaming microphone for system tuning is not calibrated.
- Safety is never a compromise.
- Anybody who's motivated to succeed can, as long as you don't give up and can take a shit load of criticism.


September 13, 2019

This iPhone App Helps Musicians Mix Themselves from Stage

Subscribe on iTunes, SoundCloud, Google Play, or Stitcher. Support Sound Design Live on Patreon.

In this episode of Sound Design Live I talk with musician and cofounder of Upscale Technology, Francis Masse. We discuss how his app, Uptune, helps musicians mix themselves from the stage.

I ask:

- Looking back on your career so far, what's one of the best decisions you made to get more of the work that you really love?
- Uptune's tagline is “Soundcheck made easy” and “your virtual soundman”. Most of the people in my audience are professional sound engineers and don't need this app, but I thought it would be an interesting way to talk about the work of being a live sound engineer. If I ask you to make a list of every action you take and every decision you make from the time you walk in the door to the time you leave, what would that look like?
- So Francis, tell us about the genesis of Uptune. Why did it get made and how did it get made?
- Walk me through setup and soundcheck using Uptune.
- What are some of the biggest mistakes you see musicians making who are trying to mix themselves from stage?

The intention behind Uptune is not to replace sound engineers, but to help musicians who don’t have a sound engineer.

Francis Masse
Notes

- Books: Mastering Audio
- Podcasts: StartUp, Startup Journey


September 6, 2019

Spot Crossover Slopes in Smaart and Avoid Falling Sharply

Key Takeaways

Verify crossover slopes by:

- Comparing them to a measurement of your DSP.
- Comparing them to my pre-measured templates.
- Calculating bandwidth.

I have often heard the advice: Avoid doubling up your crossover filters.

Most modern speakers come pre-baked with high/low-pass filters. If we insert an additional filter in our output EQ near these native filters the acoustic result will be steeper than either of the filters alone.

The danger is that we might already be planning a very narrow unity crossover between our main and sub and then be surprised when the magnitude looks like a brick wall and the phase like a pinwheel.

I’m not great at spotting these shapes, yet. To help myself practice, I measured a bunch of different crossover filters and stored them in Smaart.


Now I can compare these filter measurements to my acoustic measurements to make sure the slopes actually ended up where I wanted them. For example, here’s an alignment using Linkwitz-Riley 24dB/oct filters at 100Hz.


Comparing them to their corresponding filter measurements, it looks like I got what I paid for.


Going over some of my other previous alignments I did find some asymmetrical filters. In this alignment it looks like I ended up with a combination of 36dB/oct and 48dB/oct filters.


Here it looks like I got a 48dB/oct and a 24dB/oct combination.

Identify crossovers by measuring your DSP

You can measure your output EQ by making it the SUT (system under test). In Smaart, set the REF input of your transfer function to a loopback cable and the MIC input to the output of your DSP.

Store your trace and compare it to your acoustic result. Do they match?

Identify crossovers using these templates

To speed up the process, I already measured 80 common filters for you. Download them here. Drag them into your list of traces in Smaart.

Most of the filters are set to 80Hz so you should be able to accommodate many slopes with a little trace offset. I also measured all of the Linkwitz-Riley filters at 100Hz. If none of those match, you’ll have to measure your own.

Identify crossovers by bandwidth

We don’t normally think of a spectral crossover as having bandwidth, but it does have a clear area of interaction. This can be clearly seen if we load the traces into Phase Invaders.


Notice that the pink Sum trace has a clear starting and ending frequency. Those of you familiar with Bob McCarthy’s summation zones will also be familiar with this concept:

- Isolation: Magnitude relationship >10dB with low risk of cancellation.
- Transition: Magnitude relationship of 4-10dB with medium risk of cancellation.
- Combing: Magnitude relationship of 0-4dB with maximum risk of cancellation.
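Those thresholds condense into a tiny helper (my naming, thresholds straight from the list above):

```python
# McCarthy's summation zones as a lookup.
def summation_zone(magnitude_offset_db: float) -> str:
    offset = abs(magnitude_offset_db)
    if offset > 10:
        return "isolation: low risk of cancellation"
    if offset >= 4:
        return "transition: medium risk of cancellation"
    return "combing: maximum risk of cancellation"

print(summation_zone(12.0))  # isolation
print(summation_zone(1.5))   # combing
```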

Let’s look at some example bandwidths.

Linkwitz-Riley
Bessel
Butterworth

This leads me to the following conclusion:

- 2nd order 12dB/oct filters ≈ 3.26oct bandwidth
- 4th order 24dB/oct filters ≈ 1.65oct bandwidth
- 8th order 48dB/oct filters ≈ 0.81oct bandwidth
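Those octave bandwidths convert directly into the frequency-ratio shortcut below, since an N-octave bandwidth means f2 ÷ f1 = 2^N:

```python
# Octave bandwidth -> f2/f1 ratio for the shortcut that follows.
for slope_db_oct, bw_oct in ((12, 3.26), (24, 1.65), (48, 0.81)):
    print(f"{slope_db_oct} dB/oct: f2/f1 = 2**{bw_oct} = {2 ** bw_oct:.2f}")
# -> 9.58, 3.14, 1.75
```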

Assuming your filters are symmetrical, you could use the following shortcut:

- If f2 ÷ f1 ≈ 9.5 then you may have two 12dB/oct filters.
- If f2 ÷ f1 ≈ 3.14 then you may have two 24dB/oct filters.
- If f2 ÷ f1 ≈ 1.75 then you may have two 48dB/oct filters.

Slope considerations

The steeper the slope…

- The better it is maintained over distance.
- The more phase shift incurred.
- The faster and possibly more unnatural the transition.

Does it work in the field?

Have you learned to identify crossover slopes in the wild (in Smaart)? How did you do it?


August 30, 2019

120 Minutes of Sound System Tuning Q&A

#1
- How do you space subs in an end-fire array?
- What is the best configuration for a podium mic?
- What is the best free RTA software?
Mike Reed on LinkedIn
Aiming triangles business card

#2
- Should I phase align each sub to the main individually?
- When should I fly my subs?
- Where do I place my mics to EQ a line array?
Coherence and Reverberation
Quick Line Array Tuning Level & EQ in Smaart

#3
- Should I aim my LR line arrays in towards the center to avoid wall reflections?
- Why is cross-firing your mains good, but sub overlap is bad?
- How do you space LR mains? Where do you put the mics?
Download the 3 MAPP XT designs

Em Português

Q: To create my microphone correction curve, do I need another microphone that has its own correction curve?


August 29, 2019

Sound system design for the largest public event in Philadelphia’s history: The Eagles Super Bowl Parade

Subscribe on iTunes, SoundCloud, Google Play, or Stitcher. Support Sound Design Live on Patreon.

In this episode of Sound Design Live I talk with Chris Leonard, Director of Audio at IMS Technology Services. We discuss his sound system design and mix for the Eagles Super Bowl Parade, the largest public event in Philadelphia's history, covering one mile; dealing with wide dynamic energy coming through a podium mic; and how to build relationships and get more work with project managers.


Yes this is a large scale event, but you’re taking the principles you already know and scaling them up.

Chris Leonard
Notes

- Hardware: VTX speakers, CL5, QL5, BSS BLU806, iTech amplifiers, SG300 switches, Rio rack

Outreach

Do:
- Connect on LinkedIn.
- Show that you are actively working with your posts. Make them authentic about the level you're working at, but focus on more of the work you want to be doing. How organized are you? What equipment are you using?
- Follow other freelancers and companies.
- Send your blackout dates (vs. available) every 1-3 months.
- Make your emails personal and interesting.

Don't:
- Expect a response to your availability updates.
- Email your availability updates more than once a month.
- Offer to travel if the request is for someone local.

Quotes

- The city learned their lesson and got it right this time.
- With Dante over CAT5 you don't want to go beyond 300ft, but with fiber I can go for miles.
- I heavily use LinkedIn.
- I need people who can be an A1 for a massive show and some that can be basic breakout AV tech.
