When we in Media & Entertainment (M&E) think about the cost of cloud computing, we often turn to the subjects of virtualization, security, and bandwidth. These costs, coupled with the challenges of performance, redundancy, and scale, tend to drive the overall perspective of a tech-heavy industry like ours. But how do other sectors think about cloud, the workplace, and the workforce?

While researching the impact of cloud and digital transformation, I began to wonder whether there were rules of thumb for how we should think about facility costs, and that is when I came across the 3:30:300 rule (most often written as 3-30-300). The rule was originally proposed by JLL (Jones Lang LaSalle), a commercial real estate company with more than 4.6 billion square feet of managed properties. It was developed as a way to express the orders of magnitude between a company's costs: roughly $3 for utilities, $30 for rent, and $300 for payroll (all per square foot, per year). The idea is that the absolute costs will go up or down based on location and industry, but the relative proportions hold true.

The 3:30:300 rule is used to evaluate operational costs and efficiency, but it also serves as a guide for focusing investment. Consider the following: according to the rule, reducing energy use by 10% saves $0.30 per square foot; a 10% decrease in rent saves $3.00 per square foot; and a 10% gain in productivity is worth $30 per square foot. Obviously, focusing on productivity is one key to unlocking any company's potential, and it is therefore worth considering when thinking about migration to the cloud. If you can improve access to systems, decrease downtime, or improve the operational efficiency of just your systems, you can directly and materially improve productivity.
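That proportionality is easy to sketch in code. A minimal illustration (the per-square-foot figures come straight from the rule; the helper function is my own, purely illustrative):

```python
# The 3-30-300 rule: approximate annual cost per square foot by category.
COSTS_PER_SQFT = {"utilities": 3, "rent": 30, "payroll": 300}

def savings_per_sqft(category: str, reduction: float) -> float:
    """Annual savings per square foot from a fractional cost reduction."""
    return COSTS_PER_SQFT[category] * reduction

print(round(savings_per_sqft("utilities", 0.10), 2))  # 0.3  (10% energy cut)
print(round(savings_per_sqft("rent", 0.10), 2))       # 3.0  (10% rent cut)
print(round(savings_per_sqft("payroll", 0.10), 2))    # 30.0 (10% productivity gain)
```

The same 10% improvement is worth a hundred times more applied to people than applied to power, which is the rule's whole point.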

Understanding data center costs

Because much of our industry is technology-heavy and compute-dense, I felt we should have some form of metric that matters at the data center level, at least as a discussion point. I could not find a direct comparable, so I derived a cloud cost modeling metric from the information I could find.

I started with a fairly complete and well-understood model for a hyperscale data center, published by James Hamilton of Amazon Web Services (AWS): a committed 8 MW of power and nearly 50,000 servers. This data center represents one bookend of economic leverage, with purchasing scale and operational efficiency that no private data center can match. (Note: There are more than 500 hyperscale data centers globally.1)

The servers in Hamilton’s modeling were commodity 1RU servers at $1,500 each. This is not a good reflection of the types of servers used in broadcast, which are typically several times more expensive and much more power-dense. Other big-ticket items such as power and networking have gone up in price since Hamilton’s blog was posted, and would be much more expensive for a small data center that lacks the purchasing power of a hyperscale operator. Our understanding of cooling has also improved, and we are willing to run our servers much hotter, but this is only possible in state-of-the-art environments with highly engineered and managed cooling solutions in place.

The importance of server density

The biggest difference between cloud costs in M&E vs. other sectors, however, is server density. Anecdotally, many of the data centers we have seen in our industry are not running anywhere near the utilization or density of the hyperscale data centers. We have seen numbers ranging from 20% to 60% utilization. The bottom line is that without the ability to run at hyperscale efficiencies, your costs will be higher, potentially much higher, and that gap has a far bigger impact on the models in Hamilton’s blog than Moore’s Law would. And most of the recent innovations that would move the needle (power, cooling, custom servers) are accessible only to the hyperscale operators.
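The utilization effect is simple to quantify: every square foot of capacity you are not using still has to be paid for. A minimal sketch, using the $1,674-per-square-foot hyperscale figure derived below as the baseline:

```python
def effective_cost(cost_per_sqft: float, utilization: float) -> float:
    """Annual cost per square foot of capacity you actually use."""
    return cost_per_sqft / utilization

BASE = 1674  # $/sq ft/year at hyperscale efficiency (derived below)
for u in (0.2, 0.6, 1.0):  # the 20%-60% range seen in M&E, plus full use
    print(f"{u:.0%} utilization -> ${round(effective_cost(BASE, u)):,}/sq ft/yr")
# 20% utilization -> $8,370/sq ft/yr
# 60% utilization -> $2,790/sq ft/yr
# 100% utilization -> $1,674/sq ft/yr
```

At the low end of the observed range, the effective cost of every useful square foot is roughly five times the hyperscale baseline.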

In the AWS model, the actual server count is closer to 48,000; even if you pack those servers as densely as possible into standard rack cabinets, you will still have nearly 1,150 racks, or roughly 25,000 square feet of data center space. The operating costs of the data center are given as $3.5M/month, so each square foot costs $139.00/month, or $1,674.00 annually. This discounts the actual costs of the applications, application maintenance, backup, redundancy, etc. The goal is purely to come up with a metric that represents a high-density data center, keeping in mind that anything smaller will have much higher costs.
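The back-of-envelope math can be checked directly. In this sketch, the servers-per-rack and square-feet-per-rack figures are my assumptions (a 42U cabinet of 1RU servers, and roughly 22 square feet per rack including aisle and clearance space), not numbers from Hamilton's model:

```python
# Back-of-envelope check of the hyperscale floor-space metric.
servers = 48_000
servers_per_rack = 42        # 1RU servers in a standard 42U cabinet (assumed)
sqft_per_rack = 22           # footprint incl. aisle/clearance (assumed)
monthly_opex = 3_500_000     # $3.5M/month from the model

racks = -(-servers // servers_per_rack)   # ceiling division
floor_space = racks * sqft_per_rack
annual_per_sqft = monthly_opex / floor_space * 12

print(racks)                   # 1143 racks
print(floor_space)             # 25146 sq ft
print(round(annual_per_sqft))  # 1670, in line with the ~$1,674 metric
```

Different per-rack footprint assumptions will shift the floor space, but the resulting cost per square foot stays in the same neighborhood.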

According to a blog by James Hamilton, VP and Distinguished Engineer at Amazon Web Services (AWS), cost modeling for a large-scale data center breaks out like this: servers comprise 57% of all expenses, with the other big-ticket items being power distribution and cooling (18%), power itself (13%), and networking (8%). The sample 50,000-server buildout represents a bookend of cost efficiency, but it uses a very conservative cost model. While there have been a few changes in the years since Hamilton posted this blog, it still serves as a solid reference for understanding data center costs.
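To put those percentages in dollar terms, here is the same breakdown applied to the model's $3.5M monthly operating cost (the dollar split is my arithmetic, not Hamilton's):

```python
# Hamilton's large-scale data center cost breakdown, in dollars per month.
monthly_opex = 3_500_000
breakdown_pct = {"servers": 57, "power distribution & cooling": 18,
                 "power": 13, "networking": 8}

for item, pct in breakdown_pct.items():
    print(f"{item}: ${monthly_opex * pct / 100:,.0f}/month")
# servers: $1,995,000/month
# power distribution & cooling: $630,000/month
# power: $455,000/month
# networking: $280,000/month

print(100 - sum(breakdown_pct.values()))  # 4 -> the remaining "other" share
```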

So we now have a metric: $1,674.00 per square foot per year at the efficiency of a hyperscale data center. That is just the physical side, as it were.

Now we can look at the operating side. According to Rackspace, for every $1 you spend on capex for your own data center, you will spend an additional $2.00 to manage and secure it. That works out to approximately $5,000 per square foot per year for your private data center, again assuming you can run at hyperscale efficiency and high utilization. If you are running at 50% utilization, your effective costs will more than double. Industry speculation puts costs closer to $12,000 per square foot for small, inefficient data centers, but to be conservative we will use $3,000 per square foot annually for the data center. Not only does this roll off the tongue better, it is also probably closer to the consensus used in industries outside of M&E.
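The arithmetic behind the $5,000 figure, with Rackspace's 2:1 opex-to-capex rule of thumb as the stated assumption:

```python
# Private data center cost: the $1,674 physical metric plus Rackspace's
# rule of thumb of $2 in management/security opex per $1 of capex.
physical = 1674              # $/sq ft/year at hyperscale efficiency
management_multiple = 2      # Rackspace's 2:1 opex-to-capex ratio
total = physical * (1 + management_multiple)
print(total)                 # 5022 -> roughly $5,000/sq ft/year
```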

Unexpected benefits

JLL has begun to add “3000”2 to its original rule in presentations, using that order of magnitude to represent the unexpected benefits of highly interconnected building systems. Schneider Electric has discussed the “3000” several times in relation to better management of power in the data center. Our own industry’s data centers probably run a bit hotter, and the fact that we run massively redundant systems 24/7 probably puts us at the high end of the spectrum. We can safely adopt 3000, because the reality is that we are probably higher.

With our new rule of thumb expressed as 3:30:300:3000, we now have some way to more intuitively understand the cost benefits of a move to cloud, and we can start to work with a bit more targeted focus in our analysis.  Regardless of whether 3000 is an actual number, or merely a convenient reference point, the fact is that it represents an order of magnitude or more of relative costs in running our businesses and should be a primary point of comparison as we consider the move to cloud.

Since we are only talking about the physical side of the data center (not application acquisition, provisioning, and management), the costs translate directly into cloud-comparable pricing. The message here is clear and consistent: you can expect a 50% or greater reduction in cost of operations over five years. It doesn’t matter whether you are talking Azure, AWS, or Rackspace; each of them forecasts that same 50% or greater savings on the physical side alone. But we now know that these savings come from being able to run at efficiencies that are almost impossible to achieve without the scale and operating discipline of the hyperscale organizations.

Imagine that you could drive application utilization to 100% efficiency, along with best-of-breed security, IT management, and high-availability practices. In our 3,000-square-foot data center model, that is a net savings of $4.5M over five years. And that is definitely worth thinking about.
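As a sanity check, we can back out the annual per-square-foot savings implied by that figure (the 3,000 square feet and five-year horizon come from the model above; the rest is arithmetic):

```python
# Backing out the annual per-square-foot savings implied by $4.5M over
# five years in a 3,000-square-foot data center model.
total_savings = 4_500_000
years = 5
floor_space = 3_000  # sq ft
per_sqft_per_year = total_savings / (years * floor_space)
print(per_sqft_per_year)  # 300.0 -> $300/sq ft/year
```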

 

Footnotes

1Yevgeniy Sverdlik, “Analysts: There are Now More than 500 Hyperscale Data Centers in the World,” Oct 17, 2019

2Mike Welch, “Can your ‘Smart IoT’ building achieve JLL's latest 3:30:300:3000 rule?” August 2019

March 23, 2020 - By Imagine Communications
