Many FMs seek benchmarking data. When pressed as to what they want, they say, “A table showing metrics such as cost per unit area for my type of building, broken down into maintenance, utilities and other areas.”
If only it were that easy! Usually, once an FM is given such a table, the next question becomes something like, “Do you have that broken down for just large facilities?” Once you provide that, the question becomes, “What about facilities in my type of climate and city?”
What is really happening here is that the FM is realizing that the more general numbers describe the average facility (whatever that may be), but a more detailed breakdown is necessary to make the numbers useful. That is the only way to compare the subject facility to one that is best-in-class.
We define that “breakdown” as using a set of filters. Each of the above items becomes a filter: type of building, size of facility, climate/city, etc. In actuality, there are nearly 50 potentially useful filters for looking at utility metrics, and more that are germane to other cost centers, such as maintenance, janitorial, security, landscaping, etc.
Bottom line: There is no magic table that contains all the critical data, because it could never be a two-dimensional table. For utilities, it might require as many as 50 dimensions, although for any one facility probably no more than ten would be needed. And for each type of building, the relevant subset of ten may differ. The conclusion is that the filter set for any one building will likely differ from that for any other building, so no standard table can work for everyone.
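The filter idea above can be sketched in code. In this minimal illustration (the field names, facility records, and cost figures are hypothetical, not drawn from any real benchmarking database), each facility is a record and each filter is a simple predicate; a building's filter set is just the subset of predicates relevant to it:

```python
# Sketch of filter-based peer-group selection.
# All facility data and field names below are hypothetical.

def select_peer_group(facilities, filters):
    """Return the facilities that satisfy every active filter."""
    return [f for f in facilities if all(pred(f) for pred in filters)]

facilities = [
    {"type": "office", "gsf": 800_000,   "setting": "suburban", "cost_psf": 2.40},
    {"type": "office", "gsf": 1_200_000, "setting": "urban",    "cost_psf": 3.10},
    {"type": "office", "gsf": 450_000,   "setting": "suburban", "cost_psf": 2.90},
    {"type": "lab",    "gsf": 700_000,   "setting": "suburban", "cost_psf": 5.20},
]

# Each filter from the article becomes one predicate.
filters = [
    lambda f: f["type"] == "office",       # building type
    lambda f: f["gsf"] >= 600_000,         # size filter
    lambda f: f["setting"] == "suburban",  # climate/city proxy
]

peers = select_peer_group(facilities, filters)  # only the first record qualifies
```

Adding or removing a predicate narrows or widens the peer group, which is exactly the tightening process the example below walks through.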
Benchmarking of operating costs is popular among FMs, as these are the costs an FM can both measure and control. More than 95 percent of operational expenses fall into four categories:
- Utilities
- Maintenance
- Janitorial
- Security
An Example
Let’s look at how the FM can benchmark, given the above conclusions. We will illustrate an example for utilities using tools provided courtesy of FM BENCHMARKING. For this example, we will benchmark a 1,325,000 gross square feet (GSF) office building that is 22 years old, operating 18 hours per day. Some of the input fields are shown in Figure 1 below.
Now, let’s see how judicious use of filters to select the most appropriate peer group can affect our benchmarked results. First, we will turn on just a couple of filters to see where we stand: a size filter of 600,000 GSF or greater and a suburban setting. As shown in Figure 2, our Total Utility Cost of $2.58 per square foot sits a little below the second performance quartile, and the median cost for this peer group is $2.66 per square foot.
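The quartile placement described here can be computed directly from the peer group's cost data. Here is a sketch: the $2.58 figure is the article's, but the list of peer costs is made up for illustration (lower cost is better, so the first quartile holds the best performers):

```python
import statistics

# Hypothetical per-square-foot utility costs for a filtered peer group.
peer_costs = [1.95, 2.20, 2.45, 2.66, 2.80, 3.05, 3.40]

our_cost = 2.58  # our building's Total Utility Cost, $/GSF (from the article)

median = statistics.median(peer_costs)
q1, q2, q3 = statistics.quantiles(peer_costs, n=4)  # quartile boundaries

# Lower cost is better, so the cheapest quarter is quartile 1.
if our_cost <= q1:
    quartile = 1
elif our_cost <= q2:
    quartile = 2
elif our_cost <= q3:
    quartile = 3
else:
    quartile = 4
```

With these illustrative peer costs the median comes out to $2.66 and our $2.58 lands in the second quartile, matching the placement described above. Re-running the same computation after each added filter is all the tool is doing behind the scenes.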
What other factors might affect energy expenses? Since our building operates 18 hours a day (more than many), let’s filter on hours of operation and see how we compare with this peer group. As shown in Figure 3, the median cost increases to $3.01 per square foot, and we have moved to the left, closer to the center of the second quartile.
Might the age of the facilities in our peer group also affect utility costs? Since our facility is 22 years old, we will add age as a filter (to screen out the newer buildings). As shown in Figure 4, the median cost increases to $3.50 per square foot, and we are right at the dividing line between first- and second-quartile performance.
This seems like a reasonable peer group for comparison purposes.
But we don’t want to stop the process there! So far, all we’ve done is find out how we’re doing relative to our peer group (filter set). So let’s consider what could be done to improve our performance; isn’t that the purpose of this exercise? To do this, we will look at the best practices our peer group has implemented. As shown in Figure 5, we will compare our best practices with those implemented in our quartile and in the next better quartile.
Shown are just a few of the nearly 40 best practices for utilities. Where our facility answered NO and a high percentage of both our quartile and the next better performing quartile answered YES, we should consider implementing the practice. For example, we answered NO to motion sensors in the general office space, yet 75 percent of our peer group and 86 percent of first-quartile participants have implemented this practice. This may be something worth looking at.
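The gap analysis just described, finding practices we answered NO to that are widely adopted both in our quartile and in the next better one, can be sketched as a simple screen. The practice names, threshold, and all adoption rates except the motion-sensor row are made up for illustration:

```python
# Adoption rate (%) of each best practice by performance quartile,
# plus our own YES/NO answer. Only the motion-sensor figures come
# from the article; the rest are hypothetical.
adoption = {
    "motion_sensors_office": {"ours": "NO",  "our_quartile": 75, "next_quartile": 86},
    "led_retrofit":          {"ours": "YES", "our_quartile": 60, "next_quartile": 78},
    "night_setback":         {"ours": "NO",  "our_quartile": 30, "next_quartile": 35},
}

THRESHOLD = 50  # flag a practice only if most of both groups have adopted it

candidates = [
    name
    for name, row in adoption.items()
    if row["ours"] == "NO"
    and row["our_quartile"] >= THRESHOLD
    and row["next_quartile"] >= THRESHOLD
]
```

With these sample figures, only the motion-sensor practice survives the screen: we answered NO, and both comparison groups are well above the threshold, which is precisely the reasoning applied to Figure 5.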
These examples show how you can use benchmarking filters to narrow your comparisons to a valid peer group and then see which best practices others have implemented. By implementing those best practices, your own performance should improve over time.