Part 4 of a 4-Part Series
This is the last in a series of blogs suggesting that a good way for broadcasters to gain experience with cloud technology is by implementing remote disaster recovery. In this final part, I want to look at making it happen.
The conversation over the last few years has all been about software-defined architectures: specialist software running on commercial off-the-shelf (COTS) hardware to achieve broadcast-quality operations. One of the advantages of this philosophy is that you can design intuitive user interfaces tuned to each operator's specific needs, which provide the same experience whether the hardware responding is on premises or in the cloud.
There is an extension to that point. When you decouple the control from the action – a user interface on a computer or laptop, communicating with heavyweight processing in the cloud – you can give the operator a consistent user interface even if the underlying infrastructure differs between the primary and secondary installations.
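To make that decoupling concrete, here is a minimal sketch in Python. The class and method names are purely illustrative, not a real playout API: the point is that the operator drives a single control surface, and only the backend behind it changes on failover.

```python
# Illustrative sketch only: these class and method names are
# hypothetical, not a real playout API. The control surface depends
# on an abstract interface, so the operator's experience is identical
# whichever backend is live.
from abc import ABC, abstractmethod


class PlayoutBackend(ABC):
    """Anything that can execute playout commands, on premises or in the cloud."""

    @abstractmethod
    def take_next(self, channel: str) -> None: ...


class OnPremBackend(PlayoutBackend):
    def take_next(self, channel: str) -> None:
        print(f"[on-prem] taking next item on {channel}")


class CloudBackend(PlayoutBackend):
    def take_next(self, channel: str) -> None:
        print(f"[cloud] taking next item on {channel}")


class ControlSurface:
    """The operator's UI talks only to this class; swapping the backend
    on failover changes nothing about how the UI is driven."""

    def __init__(self, backend: PlayoutBackend) -> None:
        self.backend = backend

    def failover(self, backend: PlayoutBackend) -> None:
        self.backend = backend  # same controls, different infrastructure

    def take_next(self, channel: str) -> None:
        self.backend.take_next(channel)


surface = ControlSurface(OnPremBackend())
surface.take_next("premium-1")    # [on-prem] taking next item on premium-1
surface.failover(CloudBackend())
surface.take_next("premium-1")    # [cloud] taking next item on premium-1
```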
Consistent operation is especially important in business continuity. If disaster strikes, the last thing you want is for operators to scrabble around trying to make sense of an unfamiliar system. Performance and user interaction must be exactly the same wherever the processes are actually being performed.
This decoupling also means you can set your own service-level agreement (SLA), determining resilience and availability channel by channel. You might want your premium channels to switch over to disaster recovery in seconds, for example, while some of your secondary channels can be left for a while. That is a business decision for you: we can help you find the right cost-benefit balance.
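As a rough illustration of what such a per-channel policy could look like, the sketch below maps channels to recovery time targets and restores the tightest targets first. The channel names and second counts are placeholders, not recommendations.

```python
# Hypothetical per-channel recovery targets (seconds). The channel
# names and figures are placeholders chosen purely for illustration.
RECOVERY_TARGETS = {
    "premium-1": 5,       # hot standby: switch over within seconds
    "premium-2": 5,
    "secondary-1": 900,   # warm standby: acceptable to wait minutes
    "secondary-2": 3600,  # cold standby: spin up on demand
}


def failover_order(targets: dict[str, int]) -> list[str]:
    """Restore the channels with the tightest recovery targets first."""
    return sorted(targets, key=targets.get)


print(failover_order(RECOVERY_TARGETS))
# -> ['premium-1', 'premium-2', 'secondary-1', 'secondary-2']
```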
Putting disaster recovery playout in the cloud is a natural first step. It allows broadcasters to develop the skills needed to move content and schedules, and to work with cloud suppliers to fine-tune their systems for broadcast.
It also means that everyone in the organization gains confidence in the cloud as a suitable platform for broadcasters. Routine business continuity rehearsals will show operators how closely the cloud system's performance matches that of the on-premises system, and how seamlessly the user interface switches from one to the other.
This experience gives confidence to move on toward a completely cloud-based future. Because of the cloud's effectively infinite scalability, pop-up channels can be created in minutes rather than months, making it easy to serve sports events or music festivals, for example. You can test-market 4K UHD and HDR, and set the marginal extra delivery costs against the potential for new revenues. All the while, you are only paying for processor time when you need it.
In conclusion, then, we can now use the cloud as an effective playout system that performs just as a traditional on-premises playout network does, with the same user interface and responsiveness. It is inherently suited to remote working – if you are a global broadcaster, you could even eliminate night shifts by moving operations around the world every eight hours.
That same cloud scalability means you can add new functionality, such as artificial intelligence applications to generate metadata or automate captioning.
In the end, though, moving to the cloud is an exercise in understanding total cost of ownership (TCO), and using that to steer your progress. Take into account the real estate for racks of equipment, the power to drive them, the air conditioning to keep them cool, and the specialist staff on shift to maintain it all, as well as the need to manage operating system and software upgrades without putting the output at risk.
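A back-of-envelope comparison along those lines might look like the sketch below. Every figure in it is a placeholder to show the shape of the calculation; substitute your own costs before drawing any conclusions.

```python
# Back-of-envelope TCO comparison. Every figure below is a
# placeholder: substitute your own costs before drawing conclusions.
YEARS = 5

# On-premises: capital outlay plus recurring facility costs.
capex = 400_000                 # hypothetical hardware and installation
annual_on_prem = (
    30_000     # rack space / real estate
    + 20_000   # power
    + 15_000   # air conditioning
    + 120_000  # specialist shift staff
    + 25_000   # OS and software upgrade management
)

# Cloud: pay only for processing hours actually used.
hourly_rate = 4.0               # hypothetical per-channel rate
hours_per_year = 24 * 365

on_prem_tco = capex + annual_on_prem * YEARS
cloud_tco = hourly_rate * hours_per_year * YEARS

print(f"on-prem {YEARS}-year TCO: {on_prem_tco:,.0f}")
print(f"cloud {YEARS}-year TCO:   {cloud_tco:,.0f}")
```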
Couple the lower TCO with the boost in resilience and the convenience of remote access, and it is clear why the cloud will become the norm for content delivery in future. Building a disaster recovery playout solution in the cloud is a natural first step.