By providing a standardized, transparent, and fair way to gauge the accuracy of automated M&V techniques, EVO points the way to a flourishing ecosystem of innovative meter-based efficiency programs.
Gridium recently submitted our measurement and verification (M&V) model for independent testing via the online tool designed by Lawrence Berkeley National Laboratory and brought to market by the Efficiency Valuation Organization (EVO). Although we are gratified that our M&V diagnostics are currently the most accurate to have undergone this evaluation, we are even more excited by what this testing tool means for the future of energy efficiency.
The EVO testing tool performs out-of-sample testing, the gold standard for measuring M&V model accuracy. To do this, EVO has assembled a test data suite containing interval and weather data from hundreds of buildings across a range of geographic zones and building types.
Candidate M&V models are trained on a portion of the data from each building (the sample), and then used to predict energy use in the out-of-sample portion. EVO compares the prediction to the actual out-of-sample data to determine model quality.
This type of testing closely mimics real-world M&V, in which a model trained on pre-measure baseline energy use is used to estimate what energy use would have been had the measure not been installed. Out-of-sample testing is, put simply, the only way to truly know how well an M&V model performs; inspecting a model's source code won't tell you how accurate it is.
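To make the procedure concrete, here is a minimal sketch of out-of-sample M&V testing. Everything in it is hypothetical: a toy temperature-based linear model stands in for a real M&V model, the data are synthetic, and the 65°F balance point is an illustrative assumption. The accuracy metrics, CV(RMSE) and NMBE, are the ones commonly used in M&V (e.g. ASHRAE Guideline 14).

```python
# Hypothetical sketch: train an M&V model on one portion of a building's
# data (the sample), predict the held-out portion, and score the result.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data: outdoor temperature (F) and metered energy use (kWh).
temp = rng.uniform(40, 95, size=365)
energy = 500 + 8.0 * np.maximum(temp - 65, 0) + rng.normal(0, 20, size=365)

# Split into an in-sample (training) period and an out-of-sample period.
train_t, test_t = temp[:270], temp[270:]
train_e, test_e = energy[:270], energy[270:]

# Fit a toy cooling-degree model on the training period only.
X = np.column_stack([np.ones_like(train_t), np.maximum(train_t - 65, 0)])
coef, *_ = np.linalg.lstsq(X, train_e, rcond=None)

# Predict the held-out period and score it with CV(RMSE) and NMBE.
X_test = np.column_stack([np.ones_like(test_t), np.maximum(test_t - 65, 0)])
pred = X_test @ coef
resid = test_e - pred
cv_rmse = np.sqrt(np.mean(resid**2)) / test_e.mean()
nmbe = resid.mean() / test_e.mean()
print(f"CV(RMSE): {cv_rmse:.1%}, NMBE: {nmbe:.1%}")
```

The key point is that the model never sees the test period during fitting, so the score reflects genuine predictive skill rather than curve-fitting.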
Accuracy matters, for at least three reasons:
- Greater accuracy means greater confidence that ratepayer funds are being spent on real energy savings. M&V models are used to measure savings, which are the basis for incentive payments.
- Greater accuracy means that more buildings can be included in meter-based energy efficiency programs. Only buildings whose energy use can be modeled with a high degree of accuracy are eligible for these programs. Expanding the market size will be critical to meeting our aggressive efficiency goals.
- Greater accuracy means that smaller efficiency gains can be detected. Having a more sensitive yardstick allows a broader array of EE measures to be cost-effectively deployed in programs based on metered savings.
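The savings measurement the first bullet refers to can be sketched in a few lines. This is a hypothetical illustration with made-up numbers: metered savings are the baseline model's counterfactual prediction minus actual metered use in the post-measure period.

```python
# Hypothetical sketch of meter-based savings: avoided energy use is the
# baseline model's counterfactual prediction minus actual metered use.
import numpy as np

# Toy post-measure period data (kWh/day).
predicted_baseline = np.array([520.0, 540.0, 510.0, 530.0])  # model output
metered_actual     = np.array([470.0, 480.0, 460.0, 475.0])  # after retrofit

savings = predicted_baseline - metered_actual
print(f"Total savings: {savings.sum():.0f} kWh")
print(f"Percent savings: {savings.sum() / predicted_baseline.sum():.1%}")
```

Because incentive payments ride on the `predicted_baseline` term, any bias in the model flows directly into dollars, which is why model accuracy is the first thing a program administrator should ask about.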
This is the second time that Gridium’s model has come out on top in independent testing. We’re proud of the accuracy of our model, but the truth is that several of the top models have quite similar performance, and the small differences are unlikely to be significant in a real-world setting. This is great news for the industry, as it means that program administrators and implementers have access to several viable M&V options, which they can use to cross-validate results or to take advantage of special model features.
We believe that ultimately industry participants will differentiate their M&V offerings with advanced features. For example, Gridium has strong capabilities in modeling non-routine events, occupancy effects, 15-minute interval data, long-term trends, and complex weather factors such as humidity and insolation. Gridium also surrounds its M&V with the full scope of software, analytics, and expertise to ensure that meter-based programs are successful for both building operators and program administrators.
That’s why we’re so excited about the EVO test tool. Meter-based energy efficiency is a nascent industry, and a healthy competition between vendors will drive innovation. But program administrators need to be able to rely on the output of M&V tools, which determine the flow of energy efficiency dollars. EVO demonstrates that it is possible to create an open, transparent, and fair framework for testing M&V claims.
We expect that tools like EVO's will grow in sophistication over time. For example, it may make sense to tailor the test data set to different program types, to measure the capabilities of specialized M&V models. A given model, for example, might have very different accuracy when tested against schools in the Midwest vs. manufacturing plants in Southern California, based on how it handles factors like weather, seasonal patterns, and non-routine events.
This gets to another important but often overlooked truth about automated M&V techniques: there is no single measure of accuracy for a given model. All we can say is how accurate a model is for a given set of test data.
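This point can be demonstrated with a small hypothetical experiment: score the same "trained" model against two buildings with different load drivers. The model, the buildings, and the numbers are all invented for illustration; only the metric, CV(RMSE), is standard.

```python
# Hypothetical illustration: the same M&V model reports very different
# accuracy depending on the test data it is scored against.
import numpy as np

rng = np.random.default_rng(1)

def cv_rmse(actual, predicted):
    """CV(RMSE): RMSE of the residuals, normalized by mean actual use."""
    resid = actual - predicted
    return np.sqrt(np.mean(resid**2)) / actual.mean()

def weather_model(temp):
    # A toy "trained" model that predicts purely from temperature.
    return 500 + 8.0 * np.maximum(temp - 65, 0)

temp = rng.uniform(40, 95, size=365)

# Building A: load is mostly weather-driven, so the model fits well.
load_a = weather_model(temp) + rng.normal(0, 15, size=365)

# Building B: load is mostly schedule-driven, so the same model fits poorly.
weekday = np.arange(365) % 7 < 5
load_b = np.where(weekday, 900.0, 450.0) + 1.0 * np.maximum(temp - 65, 0)

pred = weather_model(temp)
print(f"Building A CV(RMSE): {cv_rmse(load_a, pred):.1%}")
print(f"Building B CV(RMSE): {cv_rmse(load_b, pred):.1%}")
```

One model, two very different scores: "how accurate is this model?" only has an answer relative to a particular test set, which is exactly why a shared, standardized suite matters.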
This, again, is why it is so important to have tools like EVO's that provide a standardized, transparent, and fair methodology against which M&V tools can be tested and compared. Having such a framework in place will enable a flourishing of new M&V methods and innovative efficiency programs, bringing closer the day when energy efficiency can serve as a true distributed energy resource.