
From fixed purpose, to mixed purpose: What we really mean by flexible infrastructures

*Broadcast operators during a live event*

*This article was published on December 1, 2022, on SVG Europe.*

In the old days — 10 years ago — you built a broadcast facility for a purpose. Everything in that room was organised around that one purpose, and that was pretty much all it did for several years.

Back then, you might build a studio around a particular show — maybe a daily talk show or magazine. The systems designers built the studio to do that programme perfectly. But when it was not making that programme — which might be for 22 hours a day — it would sit idle. It was a fixed-purpose facility. Or you might build a new outside broadcast truck with a lead client in mind; it could pick up other events, but only if they were similar. Everything was built, named and organised around that purpose. Build, configure and then operate for years. Life was so simple.

But today we want more. We want trucks that can do HD, 1080p and UHD, with or without HDR; that cover football one day and esports the next, and so on. Studios support multiple shows, control rooms support multiple studios, and every day is a new adventure — driven by the financial need to make the same facility investment support as much content creation as possible.

Sharing cameras between studios has been common for years, but today’s facility planners expect a whole different level of flexibility. In the old days, SDI routers came in fixed sizes and determined what could and could not be shared. But IP fabrics for media, leveraging COTS datacentre switches to connect everything together, can remove those barriers. In modern SMPTE ST 2110 IP facilities, any camera can go to any (or every) switcher in any (or every) control room. Any multiviewer PiP can access any signal from any place in the plant. The technical roadblocks are gone — allowing the sharing to be as complicated as the production team can manage.
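
To make the shift concrete, here is a minimal sketch of how routing behaves in an IP fabric — a toy model, not any real control-system API; the class and signal names are invented for illustration:

```python
# Illustrative model only: in an ST 2110 fabric, a "route" is really a
# receiver subscribing to a sender's stream, so one source can feed any
# number of destinations without consuming a fixed matrix crosspoint.
from collections import defaultdict


class MediaFabric:
    """Toy route table for an IP media fabric (hypothetical, not a real API)."""

    def __init__(self):
        self.routes = {}                  # destination -> source it currently takes
        self.listeners = defaultdict(set)  # source -> destinations taking it

    def take(self, source: str, destination: str) -> None:
        """Point a destination at a source; any source can reach any destination."""
        previous = self.routes.get(destination)
        if previous:
            self.listeners[previous].discard(destination)
        self.routes[destination] = source
        self.listeners[source].add(destination)


fabric = MediaFabric()
# The same camera feeds two control rooms and a multiviewer PiP at once.
fabric.take("CAM-3", "PCR-1/ME1-input-7")
fabric.take("CAM-3", "PCR-2/ME1-input-2")
fabric.take("CAM-3", "MV-B/PiP-14")
print(fabric.listeners["CAM-3"])
```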

Managing this complexity is the new challenge — figuring out how to capture the setups and configurations so you can put the studio back the same way for the same show tomorrow, or set up a different control room for that show, in a reasonable amount of time.
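
As a deliberately simplified illustration, a show setup can be captured as plain data — routes, multiviewer layout, seat roles — then saved for tomorrow or retargeted to a different control room. The structure and names below are hypothetical, not any particular vendor’s snapshot format:

```python
# A hypothetical "show snapshot": plain data describing how the room is set
# up, so the same show can be restored tomorrow — or moved to a different
# control room by renaming the destinations.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class ShowSnapshot:
    show: str
    routes: dict[str, str] = field(default_factory=dict)     # destination -> source
    mv_layout: dict[str, str] = field(default_factory=dict)  # PiP -> source
    positions: dict[str, str] = field(default_factory=dict)  # seat -> role

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    def retarget(self, old_room: str, new_room: str) -> "ShowSnapshot":
        """Re-aim the same show at a different control room."""
        def swap(d: dict[str, str]) -> dict[str, str]:
            return {k.replace(old_room, new_room): v for k, v in d.items()}
        return ShowSnapshot(self.show, swap(self.routes), dict(self.mv_layout), swap(self.positions))


morning_show = ShowSnapshot(
    show="Breakfast",
    routes={"PCR-1/ME1-in1": "CAM-1", "PCR-1/ME1-in2": "CAM-2"},
    mv_layout={"PiP-1": "CAM-1", "PiP-2": "PGM"},
    positions={"PCR-1/seat-4": "graphics", "PCR-1/seat-5": "replay"},
)
morning_show.save("breakfast.json")                    # same room, same show, tomorrow
evening_run = morning_show.retarget("PCR-1", "PCR-2")  # same show, different control room
```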

If control rooms are swapped and traded for different productions, then flexibility should mean that the operating positions can take on different roles; for example, a show that needs a lot of graphics operators, followed by something that needs a lot of replays. This ripples into the design concepts for the multiviewers, and the KVMs, and intercom, and all the other parts of the design.

The good news is that with IP connectivity and software-enabled functionality, every single piece of equipment is routable. That makes the facility — at least in theory — infinitely flexible.  The challenge is in taming this power and managing the complexity.

Turning engineering into the department of ‘Yes’

The corollary of this vastly increased capability is that some of the organising that we previously did only once (at build time) is now done daily as part of assigning the studio. We must now organise the turnover of the facility, automating what can be automated and check-listing the rest. We must plan how to set up and use all this infinitely flexible functionality so that the facility is ready for the crew.

Planning is key. What we have found — and we have been involved in quite a few practical implementations now — is that, in principle, the workflow can be anything you want; in practice, however, it helps to think through which workflows you will really use and treat those as design guidelines. If you need the facility to flex to many things, pick a few edge cases and detail them out. You can then use those workflows as templates to adapt to new requirements.

When the facility is sufficiently flexible, the next question is: do I have enough resources? Can I call on enough processing, enough recording inputs, enough storage? If the answer is yes, then when someone comes along with a very specialised request, you can build on your proven workflow setups and make your facility do what they need.
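
That resource question can be posed as a simple capacity check, sketched below with invented resource names and numbers:

```python
# Illustrative capacity check: does the shared pool cover everything the
# day's shows need at the same time? All names and figures are invented.
facility_pool = {"processing_channels": 64, "record_inputs": 32, "storage_tb": 200}

booked_shows = [
    {"name": "Breakfast",   "processing_channels": 20, "record_inputs": 8,  "storage_tb": 30},
    {"name": "Esports",     "processing_channels": 28, "record_inputs": 16, "storage_tb": 90},
    {"name": "Late debate", "processing_channels": 10, "record_inputs": 4,  "storage_tb": 10},
]

# Total demand per resource across every booked show.
demand = {k: sum(show[k] for show in booked_shows) for k in facility_pool}

for resource, available in facility_pool.items():
    needed = demand[resource]
    verdict = "OK" if needed <= available else f"SHORT by {needed - available}"
    print(f"{resource}: need {needed} of {available} -> {verdict}")
```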

The ultimate goal in all of this is to turn facility engineering into the department of ‘Yes’. Yes, we can put that there. Yes, we can throw another multiviewer over to that operator position. Yes, we can drive that replay system from this seat. Production wants to try new things, see what works. If they like it, they will do three of it next week.

The requirement to constantly adapt is the new normal: no two shows are the same; no two directors want the monitor wall set up the same way. But by leveraging smart control systems to script and automate these setups, show-to-show turn-up can be managed. Imagine’s investments in our Magellan Control System — especially around Magellan Touch, Live Manager and PathView — are all geared around managing the operational complexity, scripting and automating what the operator sees, and getting the right controls in front of the right operator based on the context of today’s show.
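
In practice, this kind of scripted turn-up can be as simple as mapping today’s show onto seats and pushing each seat the layout and panel its role needs. The sketch below is a generic, hypothetical illustration of the idea — it is not the Magellan API:

```python
# Hypothetical turn-up script: given today's show, decide which role sits at
# each position and which multiviewer layout / control panel that seat gets.
ROLE_PROFILES = {
    "graphics": {"mv_layout": "gfx-quad",     "panel": "gfx-keyer"},
    "replay":   {"mv_layout": "replay-10up",  "panel": "replay-ctrl"},
    "audio":    {"mv_layout": "audio-meters", "panel": "mixer-remote"},
}

SHOW_STAFFING = {
    "football":  ["replay", "replay", "replay", "graphics", "audio"],
    "talk_show": ["graphics", "graphics", "audio"],
}


def turn_up(show: str, seats: list[str]) -> dict[str, dict]:
    """Assign roles to seats for this show and return what each seat should load."""
    plan = {}
    for seat, role in zip(seats, SHOW_STAFFING[show]):
        plan[seat] = {"role": role, **ROLE_PROFILES[role]}
    return plan


for seat, cfg in turn_up("football", ["seat-1", "seat-2", "seat-3", "seat-4", "seat-5"]).items():
    print(seat, cfg)
```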

Role of the cloud

Finally, while media headlines tell us that everything will move to the cloud, the pace of that transition will vary considerably depending on the exact details and workflows.

Channel origination — taking programme and interstitial files and organising them into a branded channel — is a natural cloud operation. Even when your premium channel needs interactive management (for example, switching between packaged and live programming, or adjusting the ad breaks), it is manageable thanks to very low-latency codecs like JPEG XS. You see many television operations moving to the cloud — or to an on-prem datacentre organised like a cloud — some for resiliency, some for primary playout, and some for both.

But what about upstream? Live production? Just like with playout, the decision to build a function on premises or in the cloud (or both) is a trade-off that includes human factors, economics and risk management.

The humans tend to live on the ground and have developed work styles that (until very recently) involved being right next to the talent in a zero-delay loop with their teammates in the production. These human factors favour being on premises. Where the talent (in front of the camera or behind it) wants to live and work also plays a role.  There are important non-technical factors involved in these choices.

The pandemic forced us to conduct the experiment, and we learned that we ‘can’ work remotely and in a distributed way. But time will tell how much of that we keep doing and how much we move back on-prem. Focusing the cameras, adjusting the lighting, positioning the microphones — these are all easier to do when somebody is physically there.

But beyond these human factors, the balance of ground and cloud is economic. If you are only going to do one show for an hour once a week, then spinning up the bulk of it in the cloud has attractive economics. But if you can have two or three shows a day sharing the control room — which is the goal — then the maths can look very different. And it’s not an either/or trade-off; there is every possibility in between, doing some things in the cloud and others on prem.
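
To see why the maths changes with utilisation, here is a deliberately crude back-of-the-envelope comparison; every figure is an invented placeholder, not a real price:

```python
# Back-of-the-envelope only: all figures are invented placeholders.
# Compare renting cloud production capacity per show-hour against amortising
# an on-prem control room over its useful life.
CLOUD_COST_PER_SHOW_HOUR = 500.0   # assumed all-in spin-up cost, per hour
ON_PREM_CAPEX = 1_500_000.0        # assumed build cost of the control room
ON_PREM_LIFE_YEARS = 5
ON_PREM_OPEX_PER_YEAR = 100_000.0  # assumed power, support, maintenance


def on_prem_cost_per_show_hour(show_hours_per_week: float) -> float:
    """Amortised on-prem cost per show-hour at a given weekly utilisation."""
    yearly = ON_PREM_CAPEX / ON_PREM_LIFE_YEARS + ON_PREM_OPEX_PER_YEAR
    return yearly / (show_hours_per_week * 52)


for hours_per_week in (1, 5, 21):  # one show a week vs. two or three shows a day
    ground = on_prem_cost_per_show_hour(hours_per_week)
    print(f"{hours_per_week:>2} show-hours/week: "
          f"cloud ~{CLOUD_COST_PER_SHOW_HOUR:.0f} vs on-prem ~{ground:.0f} per hour")
```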

Ultimately, the balance of economics, risk management and human factors will be different for every situation. But wherever the hardware resides, the operational flexibility to say ‘yes’ to whatever production dreams up, when they need it, on budget and reliably, is the key point in modern infrastructures.

Imagine Communications
