Making the Map
In Chapter 16 of my book, I explain how I maintain a logical grouping between the segments of a line array covering a given audience geometry, the processing driving those segments, and my measurement mic positions, such that there’s a direct correlation between the measured response and the DSP output I adjust to affect that response - no guesswork. (See Shade and a Haircut for a basic example of the concept.) In this article I’ll explain how I extend that same logical grouping approach to the physical devices that make up a sound system (amplifiers, processors, and network switches), the patching that connects them, and the IP addresses that identify them on the control network.
Why is this important? Even if your system is 100% analog (as insisted upon by at least one of my regular FOH engineers), setting up a sound system requires connecting a lot of cables between outputs and inputs, and we want to make sure all those connections end up in the right place the first time around. This avoids unexpected results and facilitates troubleshooting - when something doesn’t work, we know where to look to solve the issue. We need a system that we always implement when patching PA drive, so we always know what our drive patch is without having to think about it too hard. (I think of my naming and patching conventions the same way many mix engineers think of their channel coloring schemes: consistency is the key to efficiency and accuracy.)
Analog Drive Patching
I try to keep my analog drive patching symmetrical by splitting the processor outputs in half, with all the PA LEFT signals starting from Output 1 and all the PA RIGHT signals starting from Output (N/2+1) for a processor with N outputs. For example, a processor with 16 XLR outputs - organized in two rows on the rear panel - would get all the PA LEFT signals starting from Output 1 across the top row, and all the corresponding PA RIGHT signals starting at Output 9 across the bottom row.
I always assign zones in the same order (Mains, Subs, Sides, 270, Front Fill, Ground Subs, Delays), skipping anything I might not be using on a particular design. (Consequently, this is more or less the order in which I would typically align the system.) For example, a show consisting of L/R mains, L/R subs, front fills and outfills would be patched into a 16-output processor (such as the Galileo 616 shown below) as follows:
Note the logical output linking (1 and 9, 2 and 10, etc.). On the rear panel of the processor, these XLR connectors form two identical rows in the same order (MAIN, SUB, OUTFILL, FRONT FILL), which I find reduces patching errors compared to patching in pairs (Main L/R on 1/2, Sub L/R on 3/4, etc.). It also allows me to easily accommodate driving subs or front fills in mono (simply eliminate outputs 10 and 12 in this example). If you’re using sub-snakes, multi-core fanouts or cat5 “shuttle” adapters to run your drive lines, you’ll likely find this simplifies the cabling process. It also means I can operate the unit’s front-panel mute buttons in vertical pairs (first column of mutes is Mains, second column is Subs, etc.), which I find handy. And it has an additional hidden benefit when we move to networked audio distribution (Dante or Milan), as we’ll see shortly.
On my AHM64 processor, which has 12 physical outputs, the same patching scheme would mean PA RIGHT starting on Output 7:
Any utility outputs such as ADA feeds would go at the end (12). And of course if you’re using an 8-output processor such as an LM44, PA RIGHT would start at Output 5 under this scheme.
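To make the symmetric-split rule concrete, here’s a short Python sketch. The zone names and the helper function are my own illustration for this article, not part of any processor’s software:

```python
def assign_outputs(zones, n_outputs):
    """Map each PA zone to a (left_output, right_output) pair.

    PA LEFT zones fill outputs starting at 1; each zone's PA RIGHT
    partner lands exactly N/2 outputs later, per the scheme above.
    """
    half = n_outputs // 2
    if len(zones) > half:
        raise ValueError("more zones than half the available outputs")
    return {zone: (i + 1, i + 1 + half) for i, zone in enumerate(zones)}

# The example system from the text on a 16-output processor:
patch = assign_outputs(["MAIN", "SUB", "OUTFILL", "FRONT FILL"], 16)
for zone, (left, right) in patch.items():
    print(f"{zone:10s}  L -> Out {left:2d}   R -> Out {right:2d}")
# MAIN pairs on 1/9, SUB on 2/10, OUTFILL on 3/11, FRONT FILL on 4/12.
# On a 12-output processor the same zones would pair as 1/7, 2/8, etc.
```

The same function reproduces the AHM64 (12 outputs, PA RIGHT from 7) and LM44 (8 outputs, PA RIGHT from 5) cases mentioned above.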
Adding Networked Amplifiers
Now let’s look at how I incorporate the networked components (processors and amplifiers) of a typical modern PA system. Since networked amplifiers can be assigned different names depending on what file we load into the system, we need a way to definitively identify each amplifier regardless of its assigned name. When we open the software and are ready to match the physical amps in the stage racks with the virtual amps in our design, we need to make those matches accurately, and whatever naming was loaded into the amps from the last show is not a foolproof method! For that reason I use static IP addresses for each amp, laid out logically based on the amplifiers’ physical locations, such that seeing the IP address in software tells me exactly where the amplifier is physically located. If you need a refresher on static IP address assignments and IP schemes, I highly recommend Introduction to Show Networking by John Huntington, which covers both concepts plus plenty of other useful information for any audio tech working with networked gear. The networking worksheets shown in this article are a variation of the concept explained in John’s book.
Unless there’s a compelling reason to adopt a different scheme, I typically use IP addresses in the range 192.168.1.xxx for all my networked devices. The last octet (the final number of the address) is divided into three categories:
values from 2 to 99 for devices located at FOH
values in the 100 range for devices located Stage Left (since the snake typically runs from FOH to Stage Left, this is the first “hop” for network traffic)
values in the 200 range for devices located Stage Right
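This convention is simple enough to express in a few lines of Python. The function name is my own, purely for illustration:

```python
def location_of(ip: str) -> str:
    """Infer a device's physical location from the last octet of its IP,
    per the FOH / Stage Left / Stage Right scheme described above."""
    last = int(ip.rsplit(".", 1)[1])
    if 2 <= last <= 99:
        return "FOH"
    if 100 <= last <= 199:
        return "Stage Left"
    if 200 <= last <= 254:
        return "Stage Right"
    raise ValueError(f"octet {last} is outside the scheme")

print(location_of("192.168.1.101"))  # Stage Left
print(location_of("192.168.1.205"))  # Stage Right
print(location_of("192.168.1.10"))   # FOH
```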
Figure 3 below shows my networking worksheet for a recent event using nine RCF XPS-16K amplifiers per side.
Notice that all the IP addresses with last octets in the 100 range are stage left (PA RIGHT) and all the addresses in the 200 range are stage right (PA LEFT). Since the amps were packaged in racks of 3, the cell outlines show the three racks per side. Referencing any amplifier’s IP in the software tells me exactly where it’s physically located. To match the amps and go online with the show file, I simply sort the discovered amps by IP address and drag and drop each one to the appropriate virtual amp in the file. Note the spare amp on each side - if an amplifier were to fail, I could simply grab the spare (.106 or .206), drag and drop to replace the failed amp in the project file, repatch the speaker cable, and we’re back up and running. And if, for example, the control software only reveals amplifiers in the .100 range but none in the .200 range, we know there must be an issue with the cross-stage network connection.
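One practical wrinkle when sorting discovered amps: sorting IP addresses as plain strings puts "192.168.1.102" before "192.168.1.2". Sorting numerically by octet (here via Python’s standard ipaddress module) keeps the list in true rack order. The discovered list below is made up for illustration:

```python
from ipaddress import IPv4Address

# A hypothetical discovery list, out of order as amps come online:
discovered = ["192.168.1.104", "192.168.1.2", "192.168.1.201", "192.168.1.101"]

# Sort numerically by octet rather than lexically as strings:
in_rack_order = sorted(discovered, key=IPv4Address)
print(in_rack_order)
# ['192.168.1.2', '192.168.1.101', '192.168.1.104', '192.168.1.201']
```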
Incorporating Redundancy
So far we’ve been using the network simply for control, but as soon as we incorporate digital signal distribution as well, via a networked audio protocol such as Dante or Milan, we want our systems to be resilient against any individual device or cable failure. Although a true switch failure is relatively rare in my experience, a poorly seated cat5e cable or dusty fiber connector is much more common, especially in the live event production world, where gear may be exposed to the elements and handled and moved on a daily basis. We don’t want a single bad connection to stop the show.
Although there are multiple ways to skin this cat, full physical redundancy is probably the most common approach in the context of PA systems for live events. Although a full explanation of network redundancy is far beyond the scope of this article, here’s the general idea: We have a network switch at FOH, which connects to the network switch in the stage left amp rack and the network switch in the stage right amp rack, often via durable touring-grade cat5e with EtherCON barrels or fiber lines with OpticalCON connectors. This is dubbed the Primary network. Now add a second network switch at FOH and a second switch to the amp rack on each side of the stage, and connect those three together in the same fashion. This is the Secondary network.
Our networked audio devices (consoles, processors and amplifiers) are designed to accommodate redundant networking, and so have two network ports that operate independently. Each device has a connection to both the Primary and Secondary networks, which carry the same traffic. If either network experiences a failure, the other is still intact and the performance is uninterrupted. Thus a cable can be replaced or reseated, or a network switch restarted, without stopping the show. Figure 4 below, a screenshot from Luminex Araneo software, shows an example of a basic redundant network topology using GigaCore network switches, a popular choice for such applications.
Note that there are other ways to implement various types of full or partial redundancy based on specific application requirements but for clarity we’ll just stick with this one (two completely physically separate networks).
How does redundant networking fit into our IP scheme? We change the third octet to 2 for any device interface on the Secondary network. Note that in the screenshot above, the last octet still obeys the rule we laid out before (2 - 99 = FOH, 100’s = SL, 200’s = SR), but all the switches on the Secondary network have an IP address of 192.168.2.xxx instead of the 192.168.1.xxx used on the Primary network. Thus a device’s IP address tells us both its physical location (fourth octet) and whether it’s a member of the Primary or Secondary network (third octet). So the first amplifier or processor on Stage Left would be assigned an IP of 192.168.1.101 for its primary network adapter and 192.168.2.101 for its secondary network adapter, and so on.
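The pairing rule is mechanical enough to sketch in code. The function name and prefix constants are illustrative, matching the scheme described above:

```python
def redundant_pair(last_octet: int) -> tuple:
    """Return (primary, secondary) addresses for one device: same last
    octet, third octet 1 for the Primary network and 2 for the Secondary."""
    return (f"192.168.1.{last_octet}", f"192.168.2.{last_octet}")

# First device on Stage Left (last octet 101 under the scheme):
primary, secondary = redundant_pair(101)
print(primary)    # 192.168.1.101
print(secondary)  # 192.168.2.101
```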
Some network devices may not have a secondary adapter, in which case we simply give them a 192.168.1.xxx address with no corresponding .2. address.
Putting it All Together
Now let’s see how all these concepts come together in a real-world design example. The system I’m working with at the time of writing is driven by d&b D80 amplifiers, which can accept both digital (AES3) and analog inputs. The front-end processing is a pair of Meyer GALAXY processors, which send redundant Milan streams as well as analog backup. A physically redundant network (as described above) transports the Milan signals to the DS20 units on both sides of the stage, which convert the Milan signal to AES to feed the amps digitally. The analog backup from the Galaxy feeds the analog amplifier inputs directly. Thus the system has triple redundancy - if Primary Milan were to fail, Secondary Milan would keep the show going, and if for some reason both were to fail, the amps can be driven via the analog backup.
The amplifiers themselves do not have redundant control connections so they’ll all be connected to the primary network only. The control computer at FOH has two network adapters installed (one for each network) so it can address and control any device on either network. The IP worksheet for the entire system looks like this (Figure 5):
The colors correspond to the color coding system used on the PA carts and trunks for the tour: red for Stage Right, blue for Stage Left and green for FOH, which helps the crew push gear to the right spot in the venue as it comes off the truck. Notice the .1. and .2. IP pairs for all the redundant devices (Galaxy processors, network switches, control computer and DS20 network bridges), and the single IP assignments for the amplifiers and inclinometers.
In addition to the network IP, the amplifiers also use a Remote ID consisting of a Subnet and a Device ID, which is how the devices communicate with the R1 control software. Notice that the subnets match the front-end DSP patching scheme I outlined above: PA LEFT starting on 1 for mains, 2 for subs, etc., and PA RIGHT starting at 9 for mains, 10 for subs, etc. These subnets match the corresponding Galaxy outputs feeding each zone. So an amp’s IP and ID tell me its physical location, what part of the system it’s driving, and what signal it’s being fed. When it comes to digital audio transport, we reap an additional benefit of patching all the PA LEFT elements sequentially rather than alternating between left and right: Milan signals in this system are transmitted in streams of 8 channels, so this patching scheme allows me to send one stream to PA LEFT and one stream to PA RIGHT and avoid “crossing the streams.” This also reduces the likelihood of making a patching error. Figure 6 below is a screenshot from Milan Manager software, showing how the Milan stream containing Galaxy outputs 9-16 is driving the Stage Left DS20 for all the PA RIGHT signals.
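The stream math is worth seeing explicitly: with 8-channel streams, the sequential left-then-right patch means each side of the PA maps cleanly onto exactly one stream. A quick sketch (the stream size matches this system; everything else is illustrative):

```python
STREAM_SIZE = 8  # channels per Milan stream in this system

# Group a 16-output processor's channels into consecutive streams:
streams = [list(range(start, start + STREAM_SIZE))
           for start in range(1, 17, STREAM_SIZE)]

print(streams[0])  # outputs 1-8  -> the PA LEFT stream
print(streams[1])  # outputs 9-16 -> the PA RIGHT stream
```

Had the zones been patched in alternating L/R pairs instead (Main L/R on 1/2, Sub L/R on 3/4, and so on), both streams would carry a mix of left and right signals, and each DS20 would need channels from both streams.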
Finally, a note about security: John Huntington advises against prominent front-panel IP labels for network devices for security reasons. Although many systems engineers use a WiFi access point for roaming system adjustments, I ensure any such APs are disabled before doors open, at which point my network is air gapped and connectivity requires physical access to the network switches.
And it should go without saying - the approach detailed here is a system developed to fit my specific workflow and applications. I encourage readers working in a similar capacity to consider their own approach - and stick to it!