IMAGINE EXECUTIVE BLOG SERIES

How we adopt AI:
practical, responsible, and built around you 


There isn’t a customer meeting that goes by where AI doesn’t come up. Sometimes it’s excitement about our product plans — what are we doing, what’s coming, and how can it help? But just as often, it’s a different set of questions: 

  • Do you use AI within your product feature set, or are you using AI for development? If so, how do we know our data and intellectual property are protected? 
  • How do we know you’re not putting our content and data into a public model? 
  • Can you certify there’s no generative AI in the products we run on air? 

These conversations matter. And how we answer them says a lot about how we work. 

At Imagine Communications, we’ve been deliberate about AI from the start. Not because we are overly cautious, but because our customers depend on us to make rational decisions regarding how our products affect their operations, their content, and their revenue. When we talk about innovation at Imagine, we’re not talking about technology for technology’s sake. We’re focused on purpose-led innovation — the kind that solves real problems, improves real workflows, and gives customers confidence they’re building on solid ground. AI is part of that story, but it’s not the whole story. 

Our AI strategy centers on two principles: efficiency and scale. Efficiency means empowering teams by automating repetitive tasks so customers can focus on work that moves the business forward — not replacing staff, but enabling them to do more of what matters. Scale means helping customers tackle challenges that simply can’t be handled manually, like analyzing thousands of scheduling permutations or surfacing insights buried deep in complex datasets. 

So here’s where Imagine is today on AI technology. 

We’ve been working with AI
for longer than you might think

In 2023, our customer support team deployed an AI-assisted tool for our Tier 1 agents. It’s a private, contained, machine-learning environment — similar in feel to ChatGPT, but completely walled off from public models and configured to draw only from our own knowledge bases: product documentation, known customer issues, and reported bugs. It learns from our environment, not from yours. 

The results so far have been impressive. We’re seeing a more than 15% improvement in First Contact Resolution — the metric that measures whether a customer’s issue gets resolved on the first call, not the second or third. For our customers, that means less downtime, less friction, and faster answers. We’ve also extended a limited version of this directly to customers through our support portal, where it accelerates navigation to the right answer while keeping any sensitive data completely protected. 
 

AI-based product development, acceleration, and quality assurance

Inside our R&D organization, AI is changing how our software engineers develop our products. Tools like GitHub Copilot are helping our engineers port code, modernize databases, and generate test cases at a pace that would have taken a team months to replicate manually. A database migration that historically required six people and six months can now be completed by one engineer in three to four weeks. To customers, this means our products evolve faster with more features, and our ability to validate and deliver quality products increases. 

But that speed doesn’t come at the expense of discipline. We’ve been deliberate about configuring these tools so that our intellectual property stays inside our walls. Nothing is shared back to public models. We’re building internal review processes to evaluate every AI tool we bring in for data protection before it touches anything customer-sensitive. The efficiency gains are real, and so are the guardrails. 
 

Putting AI to work inside our products

That same efficiency-and-scale thinking drives how we’re building AI into our products. In Landmark Rights & Scheduling, our new AI-assisted scheduling tool is unlocking capabilities that were previously out of reach — faster scheduling cycles, less repetitive manual work, and more time for high-value editorial and strategic decisions. It analyzes historical scheduling patterns, learns how operators manage their inventory, and begins to automate the lower-value work, freeing teams to focus where it matters most.  

On the video side, AI is showing up in targeted, high-value areas: QC automation, diagnostics, AI-generated test scripts, and smarter support workflows. These improvements help us raise service levels and reinforce our commitment to delivering the highest quality care in the industry. 

And innovation at Imagine goes well beyond AI. We’re leading the industry in multisite orchestration and automation, advancing intelligent multiviewing through Prismon, and investing in open, API-driven workflows across our portfolio — making large IP networks easier to deploy, operate, and scale. Practical. Secure. Measurable. Grounded in real customer needs. 
 

New AI-assisted capabilities within Landmark™ Rights & Scheduling leverage machine learning (ML) to streamline repeatable scheduling tasks while keeping humans firmly in the decision-making loop.


The right way to talk about AI
is not to overpromise it

Our customers are sophisticated. They’ve watched vendors chase AI headlines before, and they know the difference between a press release and a production deployment. Some are actively concerned about generative AI — worried about copyright exposure, content integrity, and what happens if AI-generated material ends up on air. Those concerns are legitimate, and we share them. 

We do not deploy generative AI in the content chain. When customers ask, we can explain exactly how our tools are structured, what data protections are in place, and why. In one recent case, a customer needed written confirmation that there was no generative AI in the specific products they run on-air. We could provide that with confidence, because we’d made those decisions deliberately long before anyone asked. 

Trust isn’t built by announcing AI initiatives. It’s built by using AI in ways that genuinely benefit customers, that respect their data and their operations, and that can be explained and verified. That’s the standard we hold ourselves to — and it’s the same standard that runs through everything you’ve read in this series. 

Related Products

Rights & Scheduling

Landmark™ Rights & Scheduling

Enable rights management and scheduling for premium-quality live, streaming and on-demand channels and content.

Production Infrastructure

Prismon

Prismon is a software-defined multiviewer & convergent A/V monitoring platform for broadcast, headend, and OTT environments.

Brendon Mills

Chief Product Officer

Brendon Mills is the Chief Product Officer for Imagine Communications. In this role, Brendon oversees the strategic direction of the company’s market-leading Video and Ad Tech solution portfolios.
