Many FMs seek benchmarking data to measure how their organization is performing. When asked what they want to learn, they often reply that a table or chart showing their performance compared to others is exactly what they want.
If only it were that easy! Usually, once an FM is provided a chart, the next question becomes something like, “Do you have that broken down for just my industry type, for older facilities, in my city, for buildings that operate two shifts, are cleaned with union labor, are cleaned once per week, have around 1,500 employees, and so on?”
What is really happening is that the FM realizes that general numbers are a good starting point, but that making truly informed decisions requires a more detailed breakdown by the criteria that affect operating costs. That is the only way to compare the benchmarked facility to one that is best in class.
We define that “breakdown” as a set of filters. Each of the FM’s questions above becomes a filter (a simple sketch of a filter set follows the list below):
- Industry type
- Age of the facility
- City or region
- Hours of operation
- Union or non-union labor
- Number of employees
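To make the idea concrete, here is a minimal sketch in Python of how a filter set might be represented and applied to facility records. The field names (industry_type, age_years, union_labor, and so on) are purely illustrative assumptions; FM BENCHMARKING does not expose its data this way.

```python
# A filter set is simply a collection of criteria; each facility record is a dict.
# Field names below are illustrative only, not FM BENCHMARKING fields.

facilities = [
    {"id": "A", "industry_type": "office", "age_years": 22, "region": "Midwest",
     "hours_per_day": 18, "union_labor": True, "employees": 1500},
    {"id": "B", "industry_type": "hospital", "age_years": 8, "region": "Midwest",
     "hours_per_day": 24, "union_labor": False, "employees": 3200},
    # ... more records ...
]

# The filter set for one study: only the criteria that matter for this building.
filter_set = {
    "industry_type": lambda f: f["industry_type"] == "office",
    "age": lambda f: f["age_years"] >= 15,
    "union": lambda f: f["union_labor"] is True,
}

def apply_filters(records, filters):
    """Return only the records that satisfy every active filter."""
    return [r for r in records if all(test(r) for test in filters.values())]

peer_group = apply_filters(facilities, filter_set)
print(f"{len(peer_group)} facility record(s) remain in the peer group")
```

Because the filter set is just data, a criterion can be added or dropped (say, removing the age filter after a major systems upgrade) without changing the comparison logic.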
In actuality, there are nearly 50 potentially useful filters for looking at maintenance metrics, and more that are germane to other operating costs such as utilities, janitorial, security, and landscaping.
For FMs, there isn’t any single table that contains all the critical data needed for a detailed benchmarking analysis. For maintenance alone, there may be 30 to 40 critical dimensions that could be applied, and their importance varies with the situation. For example, most FMs would expect higher costs at an older facility. But if many of the key systems have recently been replaced and the facility is gaining the benefits of an extensive upgrade, age would not be a significant factor.
The conclusion, then, is that the filter set for any one building will likely be different from the one for any other building, so there is no standard table that works for everyone. Furthermore, the filter set for studying maintenance may well differ from the one for studying janitorial, utilities, or any other aspect of a facility.
Benchmarking of operating costs has always been popular among FMs, as these are costs the FM can both measure and control. More than 95 percent of operating expenses are accounted for by four categories:
- Utilities
- Maintenance
- Janitorial
- Security
An Example
Let’s look at how the FM can benchmark, given the above conclusions. We will illustrate with a janitorial analysis using tools provided courtesy of FM BENCHMARKING. For this example, we will benchmark a 1,325,000 gross square foot (GSF) office building that is 22 years old and operates 18 hours per day. Some of the input fields are shown in Figure 1 below.
The first filters we will turn on are the size of the facility, so that we only consider buildings of 600,000 GSF or greater, and a ‘campus setting.’ In the FM BENCHMARKING system this produces 452 facilities for comparison. In Figure 2, our cost is shown by the yellow bar: $1.38 per cleanable square foot, which places our performance in the fourth quartile, though close to the start of the third quartile.
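To show what “fourth quartile, but close to the third” means numerically, here is a small sketch that places a cost of $1.38 per cleanable square foot against a peer group’s quartile boundaries. The peer costs are hypothetical stand-ins for the 452 filtered facilities; quartile 1 is the best-performing (lowest-cost) quartile.

```python
import numpy as np

# Hypothetical $/cleanable sq. ft. costs for the filtered peer group (lower is better).
peer_costs = np.array([0.80, 0.92, 0.98, 1.05, 1.10, 1.16,
                       1.22, 1.28, 1.33, 1.36, 1.55, 1.78])
our_cost = 1.38

# Quartile boundaries: 25th, 50th, and 75th percentiles of the peer distribution.
q1, q2, q3 = np.percentile(peer_costs, [25, 50, 75])

if our_cost <= q1:
    quartile = 1      # best-performing quartile (lowest costs)
elif our_cost <= q2:
    quartile = 2
elif our_cost <= q3:
    quartile = 3
else:
    quartile = 4      # worst-performing quartile (highest costs)

print(f"Quartile boundaries: Q1={q1:.2f}, Q2={q2:.2f}, Q3={q3:.2f}")
print(f"Our cost of ${our_cost:.2f} falls in quartile {quartile}")
```

With these made-up numbers, $1.38 lands just above the third-quartile boundary, which is the kind of “fourth quartile, but barely” result described above.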
What other factors might affect our janitorial expenses? Since our building is an office facility, let’s compare ourselves only with other office facilities, as we suspect that many other types of facilities have lower janitorial costs. As shown in Figure 3, our cost is now near the middle of the third quartile, which is a significant shift in our benchmarked performance. Our actual cost of $1.38 per cleanable square foot has not changed, but by filtering out non-office buildings we appear to be doing better; in reality, we are simply getting closer to defining our most accurate filter set. Comparing our facility to other office facilities is a valid comparison that will make sense to senior management when you present your performance results.
Our janitorial staff is represented by a union, so for our next comparison we will consider only those organizations with a union workforce (our rationale is that non-union labor may cost less; if so, it may be best to exclude non-union facilities from the peer group).
This filter moves our performance only slightly to the left. Since labor usually accounts for 80 to 90 percent of janitorial costs, a reasonable conclusion from this set of filters is that there is a slight, but not significant, difference in labor costs between union and non-union janitorial employees.
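The movement described in this example can also be expressed as a percentile rank against each successively narrower peer group. The cost lists below are invented solely to mirror the kind of shift discussed here; the real values would come from the benchmarking database.

```python
# Percentile rank of our cost within a peer group.
# In this convention a lower percentile is better (fewer peers are cheaper than we are).

def percentile_rank(our_cost, peer_costs):
    """Share of peers whose cost is at or below ours, as a 0-100 percentile."""
    at_or_below = sum(1 for c in peer_costs if c <= our_cost)
    return 100.0 * at_or_below / len(peer_costs)

our_cost = 1.38  # $/cleanable sq. ft., unchanged throughout

# Hypothetical peer costs for each successively filtered peer group.
size_campus_only   = [0.85, 0.95, 1.02, 1.10, 1.18, 1.25, 1.31, 1.36, 1.45, 1.60]
plus_office_filter = [1.02, 1.12, 1.20, 1.27, 1.32, 1.35, 1.37, 1.44, 1.55, 1.70]
plus_union_filter  = [1.08, 1.18, 1.25, 1.31, 1.35, 1.37, 1.46, 1.58, 1.70, 1.85]

for label, peers in [("size/campus only", size_campus_only),
                     ("plus office type", plus_office_filter),
                     ("plus union labor", plus_union_filter)]:
    print(f"{label:>17}: {percentile_rank(our_cost, peers):.0f}th percentile")
```

The same $1.38 cost ranks progressively better as the peer group more closely resembles our own building, which is exactly the effect of refining the filter set.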
Based on this comparison, we may choose either to include or to exclude the non-union workforce in our metrics; because of other external requests, we decide to filter out the non-union workforce. This seems like a reasonable peer group for comparison purposes, and there is quite a difference in the relative ranking from our initial comparison. By careful application of filters that reflect our true peer group, our facility has moved from about the 80th percentile to about the 60th percentile.

Beyond Knowing How Our Facility Compares to Others
Many FMs don’t go any further with the benchmarking process, but nothing we’ve done so far will improve performance. All we’ve done is find out how we’re doing compared to our peer group (our filter set). So let’s consider what could be done to improve our performance; isn’t that the purpose of this exercise? To do this, we will look at the best practices our peer group has implemented and see which ones the better-performing buildings are using.
FM BENCHMARKING provides a very useful tool to integrate best practices responses with the quartile results. As shown in Figure 4, we will compare our best practices with those implemented in our quartile as well as those in the next better-performing quartile.
Figure 4 shows just a few of the best practices for janitorial services. Where our facility has answered NO and a high percentage of both our quartile and the next better-performing quartile have implemented a practice, we should consider implementing it. For example, we answered NO to ‘Cleaning frequencies are established for all areas based on user needs,’ yet 93 percent of the participants in our quartile and 93 percent of the next better-performing quartile have implemented this best practice. So this may be something we should look at.
We should also look for a large jump in adoption between our quartile and the next better-performing quartile. Those best practices are the ones that have enabled others to improve their performance and move up to the next quartile.
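A simple way to automate this screening is to flag any practice where we answered NO and adoption is already high in our quartile, or where adoption jumps sharply in the next better-performing quartile. The practice names, percentages, and thresholds below are illustrative only, not data from Figure 4.

```python
# Each row: (practice, our answer, % adopted in our quartile, % in next better quartile).
# All values are illustrative placeholders, not FM BENCHMARKING data.
practices = [
    ("Cleaning frequencies set by user needs",  "NO",  93, 93),
    ("Team cleaning used in open office areas", "NO",  45, 78),
    ("Day cleaning used where feasible",        "YES", 60, 72),
    ("Green cleaning products specified",       "NO",  30, 35),
]

HIGH_ADOPTION = 75   # flag if most of our quartile already does it
BIG_JUMP = 20        # or if adoption jumps sharply in the next better quartile

for name, answer, ours, better in practices:
    if answer == "NO" and (ours >= HIGH_ADOPTION or (better - ours) >= BIG_JUMP):
        reason = ("widely adopted by peers" if ours >= HIGH_ADOPTION
                  else "big jump in next quartile")
        print(f"Consider: {name} ({reason}: {ours}% -> {better}%)")
```

Run against a real best-practices table, a screen like this produces a short candidate list; judging which flagged practices are actually worth implementing in your facility remains the FM’s call.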
These examples are meant to show how you can use benchmarking with filters to narrow your comparisons to a valid peer group and then see which best practices others have implemented. By implementing those best practices, your own performance should improve over time.