Hi. Can I hang out in this thread? I mix live sports for 3-letter TV networks. Please don’t dox me.
# Sep 26, 2022 13:56

# May 13, 2024 18:11
That’s a Calrec Apollo. It rules. Nothing else has the flexibility and routing capacity I need for a big show. Here’s the one that runs the Oscars broadcast every year. I filled both rows of top-surface faders for a golf event.
|
# Sep 26, 2022 14:13
|
The Science Goy posted:
Which channel is used for the fake bird sounds on the golf broadcasts?

I know the guy.
|
# Sep 26, 2022 14:20
|
cruft posted:
Yeah but did you see the one at the superbowl?

It comes with experience and doing things the same way every time to build muscle memory. Working on anyone else's setup always feels like suicide.

I group my people first because they go in the 5.1 center and can generally get compressed and Dugan'd together. From there I group people who might require lipsync adjustment (RF cameras tend to be 60-200ms late, for example). The "field sound" mics get laid out between "mics to establish ambience" and "mics I ride to pick up cool situational sounds." Everything gets grouped so I can build aux feeds for IFB listens and such that need to be different for each listener, but still controlled relative to the main mix.

The thing that's compounded the difficulty in the last few years is maintaining audio sync with the video gizmos that add significant processing delay: the ball tracer on golf, the virtual white line in football, the strike zone outline in baseball, the virtual signage painted on the ice for hockey. All of those things need to be delayed for, but only when they're onscreen. The video switcher can generate GPI triggers for that, but a lot of thought goes into structuring audio groups so the switch is seamless and inaudible. I hear those mistakes and hiccups constantly on local baseball broadcasts and it drives me nuts.
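The GPI-triggered delay described above can be sketched in a few lines. This is a hypothetical illustration, not any console's real API: a simple ring-buffer delay line whose delay time a switcher GPI would toggle between zero and a fixed value for the affected mic group.

```python
# Minimal sketch of a switchable delay line, assuming 48 kHz audio.
# A GPI from the video switcher would call set_delay_ms(500) when the
# delayed camera goes on air, and set_delay_ms(0) when it leaves.

SAMPLE_RATE = 48_000  # samples per second

class DelayLine:
    """Ring buffer that delays audio by a settable number of milliseconds."""

    def __init__(self, max_delay_ms):
        self.size = SAMPLE_RATE * max_delay_ms // 1000 + 1
        self.buf = [0.0] * self.size
        self.write = 0
        self.delay_samples = 0

    def set_delay_ms(self, ms):
        # In practice this jump is what the crowd/ambient mics mask.
        self.delay_samples = SAMPLE_RATE * ms // 1000

    def process(self, sample):
        """Push one input sample, return the delayed output sample."""
        self.buf[self.write] = sample
        read = (self.write - self.delay_samples) % self.size
        out = self.buf[read]
        self.write = (self.write + 1) % self.size
        return out
```

A real console would also ramp or crossfade across the delay change rather than jumping, which is exactly why the constant ambient mics matter: they cover the discontinuity on the switched group.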
|
# Sep 26, 2022 15:00
|
Think about some scenarios: a wireless camera is shooting a sideline reporter. There's some processing delay in the RF transport, maybe half a second. The mic attached to the camera comes back embedded in the video and arrives in sync. The mic in the reporter's hand reaches the truck in real time. That's an easy fix: delay the reporter's mic 150ms to air, but not to the IFB. Lips are in sync, the reporter doesn't hear an echo. All is well.

Make it a little more complicated and involve multiple cameras in a baseball game: only one camera is sent through the processing that overlays the strike zone box. It's a computationally intensive effect, and it often needs a special tripod that sends a stream of tracking data. So now only our one special camera arrives 500ms behind the 20 or more other camera signals in the truck. I could just delay the mics behind home plate 500ms, but what happens when we see someone steal home on a real-time camera, yet hear it late off the same mics I've delayed? We could maybe delay *every* video source to match the slowest one, but that's an expensive thing to do; video is much more costly to delay than audio.

The best compromise anyone has found is to use a GPI to insert 500ms of delay onto the infield mics only when the special camera is on air. Crowd and ambient mics are kept constant to cover any sound of the switch. I also have to be careful not to make fast fader moves, because if the director cuts from a real-time camera to a delayed one, it's possible to hear that fader move twice. This is my favorite thing.

eddiewalker fucked around with this message at 20:07 on Sep 26, 2022
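The sideline-reporter fix above is really per-destination delay on one source. Here's a minimal sketch of that idea; the route names and `feed` helper are illustrative, not any console's API. The mic is delayed to the air mix so lips match the late RF camera, but the IFB copy stays real-time so the reporter hears no echo of themselves.

```python
# Hypothetical sketch: one source feeding two destinations, each with
# its own delay, at an assumed 48 kHz sample rate.

def ms_to_samples(ms, rate=48_000):
    return rate * ms // 1000

# Per-destination delay in ms for the reporter's handheld mic.
routes = {
    "air": 150,  # matches the RF camera's transport delay
    "ifb": 0,    # reporter monitors themselves in real time
}

def feed(source_samples, routes):
    """Return each destination's copy of the source, padded by its delay."""
    out = {}
    for dest, delay_ms in routes.items():
        pad = [0.0] * ms_to_samples(delay_ms)
        out[dest] = pad + list(source_samples)
    return out
```

The same pattern generalizes to the baseball case: the GPI just swaps the "air" entry for the infield-mic group between 0 and 500 ms, while crowd mics keep a fixed route.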
# Sep 26, 2022 20:00